InformationWeek
InformationWeek
News and Analysis Tech Leaders Trust
  • 1 Sassen gefällt das
  • 85 Beiträge
  • 2 Fotos
  • 0 Videos
  • 0 Vorschau
  • Science &Technology
Search
Jüngste Beiträge
  • WWW.INFORMATIONWEEK.COM
    How AI is Revolutionizing Photography
    John Edwards, Technology Journalist & AuthorNovember 22, 20245 Min ReadAlessandro Grandini via Alamy Stock PhotoAIrevolutionizes just about everything. Photography is no exception.AI is a powerful tool, says Conor Gay, vice president of business operations at MarathonFoto, a firm specializing in marathon race photography. When used appropriately, it can enhance great photography and create incredible designs, he explains in an email interview. "When used carelessly, it can cause confusion, misinformation, or just plain ruin a photo."AI helps photographers realize a creative vision, observes John McNeil, founder and CEO of John McNeil Studio, a San Francisco-area based creative firm. "It's an incredibly powerful tool, helping even less-than-professional photographers create more professional images," he notes in an online interview. "Features such as exposure correction, auto enhance, and auto skin tone, allow just about anyone to take great pictures."Johnny Wolf, founder and lead photographer at Johnny Wolf Studio, a New York-based corporate photography studio, says that AI allows him to explore complex concepts in pre-production and create realistic mockups for client approval, all without even having to touch a camera. "It gives me the ability to quickly test and iterate on ideas without having to invest time and resources," he explains via email. "This results in a more focused discovery phase with clients and leads to fewer revisions during the editing process."Related:Efficiency and QualityAI tools enable greater efficiency and higher quality when capturing images, automatically detecting subjects, optimizing an image at the moment it's taken, says Chris Zacharias, founder and CEO of visual image studio Imgix. AI tools can identify subjects and objects within an image to allow greater precision in editing," he notes in an email interview. "We can remove unwanted elements or introduce new ones into a photograph in pursuit of a creative vision."Wolf says that AI's greatest impact has been automating the mundane. "Basic tasks, like whitening a subject's teeth, or cloning-out distracting background elements, used to involve a time-consuming masking process, which can now be done with one click," he explains. "With AI handling the drudgery of post-production, I'm free to dedicate more time and energy into creative exploration, improving my craft and delivering a more personalized and impactful final product."AI has allowed us to identify images faster and more accurately than ever before, Gay says. "In the past two years, we've been able to get more images into runners' galleries, typically within 24 hours of their finish," he notes. "AI has also allowed us to capture more unique shots and angles."Related:Gay adds that AI can also capture relevant photo data that can be used by race partners and sponsors. "We're now able to identify sponsor-branding that appears in our photos, and even capture data around apparel and footwear." The technology is also used to enhance images. "We see different weather and lighting conditions throughout the day," he notes. "AI allows us to enhance these images to their highest quality."AI's power, control, flexibility, and possibilities are absolutely incredible, McNeil states. "Photoshop was a game changer 30 years ago, and in less than three years, AI makes things like histograms and layers seem positively quaint."The DownsideAI's ethical implications are significant, and will require discussion, consideration, and action by a wide range of stakeholders and organizations, Zacharias says. "There's much to consider, and the impacts are already being felt."Maintaining authenticity is a top concern, Gay says. "Especially in our industry, runners work tirelessly to complete their races," he notes. "The idea of someone being able to create a fake finish line moment with AI discredits the hard work each athlete puts into their race." Gay says his goal is to document runners' journeys on race day and to be as accurate as possible.Related:McNeil worries that there may now be too much reliance on AI. "The term 'well fix it in post' used to be a lazy joke people would make on set," he says. "Today, it's literally the process." Yet such an attitude can lead to images that are poorly crafted, uninventive, and looking like they were generated by AI. "Ultimately, as creative people and artists, we need to be more critical about the work we're putting into the world."While photo manipulation is nothing new, AI's ability to instantly generate photography that's indistinguishable from reality has led to a frightening inflection point, Wolf warns. "Anyone with an agenda and a web browser can now create and disseminate AI-generated propaganda as a real-time response to events," he explains. "If society can no longer trust photos as evidence of truth, we'll retreat further into our echo chambers and consume content that has been generated to reinforce our views."Looking ForwardArtists have always adapted and leveraged new tools and technologies to create novel forms of self-expression, Zacharias says. "The coming years will see a lot of discussion about what is real or authentic," he notes. "At the end of the day, AI is and will continue to be a tool, and it is we humans who will define what the soul of the medium is."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports
    0 Kommentare 0 Anteile 6 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Innovation Relies on Safeguarding AI Technology to Mitigate its Risks
    Brandon Taylor, Digital Editorial Program ManagerNovember 22, 20245 Min ViewAs artificial intelligence (AI) continues to advance and be adopted at a blistering pace, there are many ways AI systems can be vulnerable to attacks. Whether being fed malicious data that enables incorrect decisions or being hacked to gain access to sensitive data and more, there are no shortage of challenges in this growing landscape.Today, it's more vital than ever to consider taking steps to ensure that generative AI models, applications, data, and infrastructure are protected.In this archived panel discussion, Sara Peters (upper left in video), InformationWeeks editor-in-chief; Anton Chuvakin (upper right), senior staff security consultant, office of the CISO, for Google Cloud; and Manoj Saxena (lower middle), CEO and executive chairman of Trustwise AI, came together to discuss the importance of applying rigorous security to AI systems.This segment was part of our live virtual event titled, State of AI in Cybersecurity: Beyond the Hype. The event was presented by InformationWeek and Dark Reading on October 30, 2024.A transcript of the video follows below. Minor edits have been made for clarity.Sara Peters: All right, so let's start here. The topic is securing AI systems, and that can mean a lot of different things. It can mean cleaning up the data quality of the model training data or finding vulnerable code in the AI models.Related:It can also mean detecting hallucinations, avoiding IP leaks through generative AI prompts, detecting cyber-attacks, or avoiding network overloads. It can be a million different things. So, when I say securing AI systems, what does that mean to you?What are the biggest security risks or threats that we need to be thinking about right now? Manoj, I'll send that to you first.Manoj Saxena: Sure, again, thanks for having me on here. Securing AI broadly, I think, means taking a proactive approach not only to the outside-in view of security, but also the inside-out view of security. Because what we're entering is this new world that I call prompt to x. Today, it's prompt to intelligence.Tomorrow, it will be prompt to action through an agent. The day after tomorrow, it will be prompt to autonomy, where you will tell an agent to take over a process. So, what we are going to see in terms of securing AI are the external vectors that are going to be coming into your data, applications and networks.They're going to get amplified because of AI. People will start using AI to create new threat vectors outside-in, but also, there will be a tremendous number of inside-out threat vectors that will be going out.Related:This could be a result of employees not knowing how to use the system properly, or the prompts may end up creating new security risks like sensitive data leakage, harmful outputs or hallucinated output. So, in this environment, securing AI would mean proactively securing outside-in threats as well as inside-out threats.Anton Chauvkin: So, to add to this, we build a lot of structure around this. So, I will try to answer without disagreeing with Manoj, but by adding some structure. Sometimes I joke that it's my 3am answer if somebody says, Anton secure AI! What do you mean by this? I'll probably go to the model that we built.Of course, that's part of our safe, secure AI framework approach. When I think about securing AI, I think about models, applications, infrastructure and data. Unfortunately, it's not an acronym, because the acronym would be MADE, and it'll be really strange.But after somebody said it's not an acronym, obviously, everybody immediately thought it's an acronym. The more serious take on this is that if I say securing AI, I think about securing the model, the applications around it, the infrastructure under it, and the data inside it.I probably won't miss anything that's within the cybersecurity domain, if I think about these four buckets. Ultimately, I've seen a lot of people who obsess about one, and all sorts of hilarious and sometimes sad results happen. So, for example, I go and say the model is the most important, and I double down on prompt injection.Related:Then, SQL injection into my application kills me. If I don't want to do it in the cloud for some reason, and I try to do it on premise, my infrastructure is let go. My model is fine, my application is great, but my infrastructure is let go. So, ultimately, these four things are where my mind goes when I think about securing AI systems.MS: Can I just add to that? I think that's a good way to look at the stack and the framework. I would add one more piece to it, which is around the notion of securing the prompts. This is prompt security and filtering, prompt defense against adversarial attacks, as well as real time prompt validation.You're going to be securing the prompt itself. Where do you think that fits in?AC: We always include it in the model, because ultimately, the prompt issues to us are AI specific issues. Nothing in the application infrastructure data is AI specific, because these exist, obviously, for non-applications. For us, when we talk about prompt, it always sits inside the M part of the model.SP: So, Google's secure AI framework is something that we can all look for and read. It's a thorough and interesting read, and I recommend to our audience to do that later. But you guys have just covered a wide variety of different things already when I asked the first question.So, if I'm a CIO or a CISO, what should I be evaluating? How do I evaluate the security of a new AI tool during the procurement phase when you have just given me all these different things to try to evaluate? Anton, why don't you start with that one?Watch the archived State of AI in Cybersecurity: Beyond the Hype live virtual event on-demand today.About the AuthorBrandon TaylorDigital Editorial Program ManagerBrandon Taylor enables successful delivery of sponsored content programs across Enterprise IT media brands: Data Center Knowledge, InformationWeek, ITPro Today and Network Computing.See more from Brandon TaylorNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports
    0 Kommentare 0 Anteile 6 Ansichten
  • WWW.INFORMATIONWEEK.COM
    The New Cold War: US Urged to Form Manhattan Project for AGI
    Shane Snider, Senior Writer, InformationWeekNovember 21, 20245 Min ReadIvan Marc Sanchez via Alamy StockA bi-partisan US congressional group this week released a report urging a Manhattan Project style effort to develop AI that will be able to outthink humans before China can win the AI arms race.The US-China Economic and Security Review Commission outlined the challenges and threats facing the US as powerful AI systems continue to quickly proliferate. The group calls for the government to fund and collaborate with private tech firms to quickly develop artificial general intelligence (AGI).The Manhattan Project was the historic collaboration between government and the private sector during World War II that culminated in the development of the first atomic bombs, which the US infamously unleashed on Japan. The subsequent proliferation of nuclear weapons led to an arms race and policy of mutually assured destruction that has so far deterred wartime use, but sparked the Cold War between the United States and Russia.While the Cold War with Russia ultimately ended in 1991, the nuclear stalemate caused by the arms pileup remains.A new stalemate may be brewing as superpowers race to develop AGI, which ethicists warn could present an existential threat to humanity. Many have likened such a race to the plot of the Terminator movie, where the fictional company Cyberdyne Systems works with the US government to achieve a type of AGI that ultimately leads to a nuclear catastrophe.Related:The commissions report doesnt sugarcoat the possibilities. The United States is locked in a long-term strategic competition with China to shape the rapidly evolving global technological landscape, according to the report. The rise in emerging tech like AI could alter the character of warfare and for the country winning the race, would tip the balance of power in its favor and reap economic benefits far into the 21st century.AI Effort in China ExpandsChinas State Council in 2017 unveiled its New Artificial Intelligence Development Plan, aiming to become the global leader in AI by 2030.The US still has an advantage, with more than 9,500 AI companies compared to Chinas nearly 2,000 companies. Private investment in the US dwarfs Chinas effort, with $605 billion invested, compared to Chinas $86 billion, according to a report from the non-profit Information Technology & Innovation Foundation.But Chinas government has poured a total of $184 million into AI research, including facial recognition, natural language processing, machine learning, deep learning, neural networks, robotics, automation, computer vision, data science, and cognitive computing.Related:While four US large language models (LLMs) sat on top of performance charts in April 2024, by June, only OpenAIs GPT-4o and Claude 3.5 remained on top. The next five models were all from China-backed companies.The gap between the leading models from the US industry leaders and those developed by Chinas foremost tech giants and start-ups is quickly closing, the report says.Where the US Should FocusThe report details areas that could make the biggest impact on the AI arms race where the US currently has an advantage, including advanced semiconductors, compute and cloud, AI models, and data. But China, the report contends, is making progress by subsidizing emerging technologies.The group recommends a priority on AI defense development for national security, with contracting authority given to the executive branch. The commission urges US Congress to establish and fund the program, with the goal of winning the AGI development race.The report also recommends banning certain technologies controlled by China, including autonomous humanoid robots, and products that could impact critical infrastructure. US policy has begun to shift to recognize the importance of competition with China over these critical technologies, the report states.Related:Manoj Saxena, CEO and founder of Responsible AI Institute and InformationWeek Insight Circle member, says the power of AGI should not be underestimated as countries race toward innovation.One issue is rushing to develop AGI just to win a tech race and not understanding the unintended consequences that these AI systems could create, he says. it could create a situation where we cannot control things, because we are accelerating without understanding what the AGI win would look like.Saxena says the AGI race may result in the need for another Geneva Convention, the global war treaties and humanitarian guidance that were greatly expanded after World War II.But Saxena says a public-private collaboration may lead to better solutions. As a country, were going to get not just the best and brightest minds working on this, most of which are in the private sector, but we will also get wider perspectives on ethical issues and potential harm and unintended consequences.An AI Disaster in the Making?Small actors have limited access to the tightly controlled materials needed to make a nuclear weapon. AI, on the other hand, enjoys a relatively open and democratized environment. Ethicists worry that ease of access to powerful and potentially dangerous systems may widen the threat landscape.RAI Institutes Saxena says weaponization of AI is already occurring, and it might take a catastrophic event to push all parties to the table. I think there is going to be some massive issues around AI going rogue, around autonomous weapon attacks that go out of control somewhere Unfortunately, civilization progresses through a combination of regulations, enforcement, and disasters.But in the case of AI, regulations are far behind, he says. Enforcements are also far behind, and it's more likely than not that there will be some disasters that will make us wake up and have some type of framework to limit these things.About the AuthorShane SniderSenior Writer, InformationWeekShane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.See more from Shane SniderNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports
    0 Kommentare 0 Anteile 26 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Does the US Government Have a Cybersecurity Monoculture Problem?
    Carrie Pallardy, Contributing ReporterNovember 21, 20244 Min ReadSOPA Images Limited via Alamy Stock PhotoThe way Microsoft provided the US government with cybersecurity upgrades is under scrutiny. ProPublica published a report that delves into the White House Offer: a deal in which Microsoft sent consultants to install cybersecurity upgrades for free. But those free product upgrades were only covered for up to one year.Did this deal give Microsoft an unfair advantage, and what could it take to shift the federal governments reliance on the tech giants services?The White House OfferProPublica spoke to eight former Microsoft employees that played a part in the White House Offer. With their insight, the ProPublicas report details how this deal makes it difficult for users in the federal government to shift away from Microsofts products and how it helped to squeeze out competition.While the cybersecurity upgrades were initially free, government agencies need to pay come renewal time. After the installation of the products and employee training, switching to alternatives would be costly.ProPublica also reports that Microsoft salespeople recommended that federal agencies drop products from competitors to save costs.Critics raise concerns that Microsofts deal skirted antitrust laws and federal procurement laws.Why didn't you allow a Deloitte or an Accenture or somebody else to say we want free services to help us do it? Why couldn't they come in and do the same thing? If a company is willing to do something for free like that, why should it be a bias to Microsoft and not someone else that's capable as well? asks Morey Haber, chief security advisor at BeyondTrust, an identity and access security company. Related:ProPublica noted Microsofts defense of its deal and the way it worked with the federal government. Microsoft declined to comment when InformationWeek reached out.Josh Bartolomie, vice president of global threat services at email security company Cofense, points out that the scale of the federal government makes Microsoft a logical choice.The reality of it is there are no other viable platforms that offer the extensibility, scalability, manageability other than Microsoft, he tells InformationWeek.The Argument for DiversificationOverreliance on a single security vendor has its pitfalls. Generally speaking, you don't want to do a sole provider for any type of security services. You want to have checks and balances. You want to have risk mitigations. You want to have fail safes, backup plans, says Bartolomie.And there are arguments being made that Microsoft created a cybersecurity monoculture within the federal government.Related:Sen. Eric Schmitt (R-Mo.) and Sen. Ron Wyden (D-Ore.) raised concerns and called for a multi-vendor approach.DoD should embrace an alternate approach, expanding its use of open-source software and software from other vendors, that reduces risk-concentration to limit the blast area when our adversaries discover an exploitable security flaw in Microsofts, or another companys software, they wrote in a letter to John Sherman, former CIO of the Department of Defense.The government has experienced the fallout that follows exploited vulnerabilities. A Microsoft vulnerability played a role in the SolarWinds hack.Earlier this year it was disclosed that Midnight Blizzard, a Russian state-sponsored threat group,executed a password spray attack against Microsoft. Federal agency credentials were stolen in the attack, according to Cybersecurity Dive.There is proof out there that the monoculture is a problem, says Haber.PushbackMicrosofts dominance in the government space has not gone unchallenged over the years. For example, the Department of Defense pulled out of a $10 billion cloud deal with Microsoft. The contract, the Joint Enterprise Defense Infrastructure (JEDI), faced legal challenges from competitor AWS.Related:Competitors could continue to challenge Microsofts dominance in the government, but there are still questions about the cost associated with replacing those services.I think the government has provided pathways for other vendors to approach, but I think it would be difficult to displace them, says Haber.A New AdministrationCould the incoming Trump administration herald changes in the way the government works with Microsoft and other technology vendors?Each time a new administration steps in, Bartolomie points out that there is a thirst for change. Do I think that there's a potential that he [Trump] will go to Microsoft and say, Give us better deals. Give us this, give us that? That's a high possibility because other administrations have, he says. The government being one of the largest customers of the Microsoft ecosystem also gives them leverage.Trump has been vocal about his America First policy, but how that could be applied to cybersecurity services used by the government remains to be seen. Do you allow software being used from a cybersecurity or other perspective to be developed overseas? asks Haber.Haber points out that outsourced development is typical for cybersecurity companies. I'm not aware of any cybersecurity company that does exclusive US or even North America builds, he says.Any sort of government mandate requiring cybersecurity services developed solely in the US would raise challenges for Microsoft and the cybersecurity industry as a whole.While the administrations approach to cybersecurity and IT vendor relationships is not yet known, it is noteworthy that Trumps view of tech companies could be influential. Amazon pursued legal action over the $10 billion JEDI contract, claiming that Trumps dislike of company founder Jeff Bezos impacted its ability to secure the deal, The New York Times reports.About the AuthorCarrie PallardyContributing ReporterCarrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.See more from Carrie PallardyNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports
    0 Kommentare 0 Anteile 29 Ansichten
  • WWW.INFORMATIONWEEK.COM
    The Evolution of IT Job Interviews: Preparing for Skills-Based Hiring
    In recent years, IT job interviews have undergone a significant transformation. The traditional model, characterized by casual face-to-face conversations and subjective evaluations, is gradually being replaced by a more structured, skills-focused approach. This shift reflects a broader change in how organizations value and assess talent, moving away from an overemphasis on degrees in favor of the candidate's actual abilities and accomplishments.Major technology companies like Google, IBM, and Comcast have signed the Tear the Paper Ceiling initiative, signaling a significant change in hiring practices across various industries. In part, these companies are reacting to ongoing IT skill gaps, which IDC predicts will be responsible for more than $5.5 trillion in losses by 2026, causing significant harm to 90% of companies. Especially in an age where online resources for obtaining technical skills are so widely available, this shift will open job opportunities for candidates who possess the capabilities to perform well but lack a degree.The Rise of Structured InterviewsAs the emphasis shifts toward skills-based hiring, the interview process itself is evolving. HR departments are increasingly adopting structured interviews, recognizing their effectiveness in predicting job performance and employee retention compared to less formal traditional approaches.Related:Effectively structured interviews employ consistency in questioning across all candidates for a given position, and these questions focus on real-world applications of skills and achieved results. Structured interviews are most predictive of job performance when conducted by a panel of trained interviewers, and, after the interview is done, each panelist evaluates the candidate using standardized evaluation criteria before they come to a consensus.Preparing for the New Interview LandscapeAs job seekers navigate this evolving landscape, it's important to prepare for skills-based interviews. Here are some key things to consider:1. Analyze the job description: The job description serves as a roadmap for interview preparation. Carefully dissect both explicit and implicit skill requirements, using this information to guide their preparation.2. Brush up on technical proficiency: With the increased likelihood of technical or skills-based questions during the interview process, be prepared to demonstrate your technical abilities that are relevant for the job in real-time. This might entail solving coding challenges or troubleshooting complex scenarios relevant to the role.Related:3. Develop a repertoire of skills stories: Prepare a collection of compelling examples that illustrate how youve applied your skills to achieve results in the past like those that will be required on the job to which you are applying. Dont forget so-called soft skills. Companies are placing an increased emphasis on these for technical positions, so make sure to highlight your experience applying skills like planning, interpersonal communication, teamwork, and problem-solving to overcome challenges or achieve a goal.4. Align with organizational values: Understanding and demonstrating alignment with a companys culture and core values has become increasingly important. Research the organization's ethos and prepare concrete examples from your professional experience that reflect these values.5. Highlight individual contributions: In skills-based interviews, its not enough to simply be part of a successful team. Interviewers want to understand your specific role and contributions to solving problems or achieving goals. When discussing accomplishments, focus on what you contributed to the teams success, the methods and approaches you employed, and the quantifiable outcomes that resulted from these efforts.Related:The Implications of Skills-Based HiringThe shift toward skills-based hiring has far-reaching implications for both job seekers and employers. For candidates, it means a greater emphasis on demonstrating tangible technical and soft skills, including the impact candidates have had, rather than relying solely on what degrees they possess. This approach can level the playing field by allowing individuals to showcase their capabilities regardless of their educational background or prior career path.For employers, skills-based hiring offers the potential for more diverse and capable teams. By focusing on competencies rather than degrees, organizations can tap into a broader talent pool and potentially identify great candidates who would have been arbitrarily rejected in the past because they didnt have a computer science or engineering degree.Embracing the Future of HiringAs we move further into the era of skills-based hiring, both IT job seekers and employers must adjust their approaches. For candidates, this means shifting focus from degrees to capabilities and preparing to demonstrate their core skills and results during the interview process. Its no longer just about having a polished resume; its about being ready to show what you can do.For organizations, the challenge lies in developing robust, fair, and effective skills-based hiring processes. This may involve rethinking job requirements, redesigning interview processes, and investing in new assessment tools.Ultimately, the evolution of job interviews reflects a broader shift in how we value and assess talent in the modern workplace. By embracing these changes and preparing accordingly, both candidates and employers can navigate the workplace more effectively, leading to better matches between individuals and roles, and ultimately, more successful and satisfying professional relationships.
    0 Kommentare 0 Anteile 7 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Help Wanted: IT Hiring Trends in 2025
    Lisa Morgan, Freelance WriterNovember 20, 20248 Min ReadEgor Kotenko via Alamy Stock Digital transformation changed the nature of the IT/business partnership. Specifically, IT has become a driving force in reducing operating costs, making the workforce more productive and improving value streams. These shifts are also reflected in the way IT is structured."When it comes to recruiting and attracting IT talent, it is time for IT leadership to shine. Their involvement in the process needs to be much more active to find the resources that teams need right now. And more than anything, its not the shiny new roles we are struggling to hire for. Its [the] on-prem network engineer and cloud architect you need to drive business outcomes right now. Its the cybersecurity analyst, says Brittany Lutes, research director at Info-Tech Research Group in an email interview.Most organizations arent sunsetting roles, she says. Instead, theyre more focused on retaining talent and ensuring that talent has the right skills and degree of competency in those skills.It takes time to hire new resources, ensure the institutional knowledge is understood, and then get those people to continue learning new skills or applications of the skills they were hired for, says Lutes. We are better off to retain people, explore opportunities to bring in new levels or job titles with HR to satisfy development desires, and understand what the new foundational and technical skills exist that we need to grow in our organization. We have opportunities to use technology in exciting new ways to make every role from CIO to the service desk analyst more efficient and more engaging. This year I think many organizations will work to embrace that.Related:Brittany Lutes, Info-Tech Research GroupBusiness and Technology Shifts Mean IT Changes?Julia Stalnaya, CEO and founder of B2B hiring platform Unbench, believes IT hiring in 2025 is poised for significant transformation, shaped by technological advancements, evolving workforce expectations and changing business needs.The 2024 layoffs across tech industries have introduced new dynamics into the hiring process for 2025. Companies [are] adapting to leaner staffing models increasingly turn to subcontracting and flexible hiring solutions, says Stalnaya.There are several drivers behind these changes. They include technological advancements such as data-driven recruitment, AI and automation.As a result of the pandemic, remote work expanded the talent pool beyond geographical boundaries, allowing companies to hire top talent from diverse locations. This trend necessitates more flexible work arrangements and a shift in how companies handle employee engagement and collaboration.Related:Skills-based hiring will focus more on specific skills and less on traditional qualifications. This reflects the need for targeted competencies aligned with business objectives, says Stalnaya. This trend is significant for roles in rapidly evolving fields like AI, cloud engineering and cybersecurity.Some traditional IT roles will continue to decline as AI takes on more routine tasks while other roles grow. She anticipates the following:AI specialists who work across departments to deploy intelligent systems that enhance productivity and innovationCybersecurity experts, including ethical hackers, cybersecurity analysts and cloud security specialists. In addition to protecting data, they will also help ensure compliance with security standards and develop strategies to safeguard against emerging threats.Data analysts and scientists who help the business leverage insights for strategic decision-makingBlockchain developers able to build decentralized solutionsHowever, organizations must invest in training and development and embrace flexible work options if they want to attract and keep talent, which may conflict with mandatory return to office (RTO) policies.Related:The 2024 layoffs have had a profound impact on the IT hiring landscape. With increased competition for fewer roles, companies now have access to a larger talent pool. Still, they must adapt their recruitment strategies to attract top candidates who are selective about company culture, flexibility and growth opportunities, says Stalnaya. This environment also highlights the importance of subcontracting.Julia Stalnaya, UnbenchGreg Goodin, managing director of talent solutions company EXOS TALENT expects companies to start hiring to get new R&D projects off the ground and to become more competitive.Dont expect it to bounce back to pandemic or necessarily pre-pandemic levels, says Goodin. IT as a career and industry has reached a maturation point where hypergrowth will be more of an outlier and more consistent 3% to 5% year-over-year growth [the norm]. Fiscal responsibility will become the expectation. Hiring trends will most likely run in parallel with this new cycle with compensation leveling out.Whats Changing, Why and How?Interest rates are higher than they have been in recent history, which has directly influenced companies' hiring practices. Not surprisingly, AI has also had an impact, making workforces more productive and reducing costs.Meanwhile, hiring has become more data-driven, enabling organizations to better understand what full-time and contingent labor they need.During the pandemic, companies continued to hire, even if they didnt have a plan for what the new talent would be doing, according to Goodin.This led to a hoarding of employees and spending countless unnecessary dollars to have people essentially doing nothing, says Goodin. This was one of many reasons companies started to reset their workforce with mass layoffs. Expect more thoughtful, data-driven hiring practices to make sure an ROI is being realized for each employee [hired].The IT talent shortage persists, so universities and bootcamps have been attempting to churn out talent thats aligned with market needs. Companies have also had more options, such as hiring internationally, including H-1B visas.Technology moves at a rapid pace, so it is important to maintain an open mind to new ways of solving problems, while not jumping the gun on a passing fad, says Goodin. Continue to invest in your existing workforce and upskill them, when possible. This will lead to better employee engagement [and] decreased costs associated with hiring and training up new talent into your organization.Soft-skills such as communication, character, and emotional quotient will all be that much more coveted in a world utilizing AI and automation to supplement human-beings, he says.IT and the Business IT has always supported the business, but its role is now more of a partnership and a thought leader when it comes to succeeding in an increasingly tech-fueled business environment.By 2025, I believe IT hiring will reflect a new paradigm as the line between IT and other business functions continues to blur, driven by AIs growing role in daily operations. Instead of being confined to back office support, IT will become a foundational aspect of strategic business operations, blending into departments like marketing, finance, and HR. This blur will likely accelerate next year, with roles and responsibilities traditionally managed by IT -- like data security, process automation and analytics -- becoming collaborative efforts with other departments, says Etoulia Salas-Burnett, director of the Center for Digital Business at Howard University. In an email interview This shift demands IT professionals who can bridge technical expertise with business strategy, making the boundary between IT and other business functions increasingly indistinct.In 2025, she believes several newer roles will become more common, including AI integration specialists, AI ethics and compliance officer, digital transformation strategist and automation success managers. Waning titles include help desk technician and network administrator, she says.Stephen Thompson, former VP of Talent at Docusign says the expansion of cloud services and serverless architectures has driven costs up, absorbing a growing portion of IT budgets. In some cases, server expenses rival the total cost of all employees at certain companies.Enterprise organizations are actively seeking integrations with platforms like Salesforce, ServiceNow, and SAP. The serverless shift and the continuous need for integration engineers have required IT departments to evolve, becoming stronger engineering partners and application developers for critical in-house systems in sales, marketing, and HR, says Thompson in an email interview. As a result, 2025 may resemble the 2012 to 2015 period, with new technologies promising growth, and a high demand for scalable engineering expertise. Companies will seek software engineers who not only maintain but also optimize system performance, ensuring a significant return on investment. These professionals turn the seemingly impossible into reality, saving IT departments millions in the process.Green Tech Will Become More PopularFrom smaller AI models to biodegradable and recycled packaging, tech is necessarily becoming greener.We are already seeing many companies review their carbon footprint and prioritize sustainability projects, in response to climate change [and] customer and client demand. CIOs and other tech leaders will likely face more pressure to prove their sustainability and green plans within their IT projects, says Matt Collingwood, founder & managing director at VIQU IT Recruitment. This may include legacy systems needing to be phased out, tracking energy consumption across the business and supply chain, and more. In turn, this will create an increasing demand for IT roles within infrastructure, systems engineering and development.In the meantime, organizations should be mindful about algorithmic and human bias in hiring.Organizations need to make sure that they are hiring inclusively, says Collingwood. This means anonymizing CVs to reduce chances of unconscious bias, as well as putting job adverts through a gender decoder to ensure the business is not inadvertently putting off great female tech professionals.About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports
    0 Kommentare 0 Anteile 7 Ansichten
  • WWW.INFORMATIONWEEK.COM
    How Will AI Shape the Future of Cloud and Vice Versa?
    What role does AI have in the current state of cloud? What types of cloud systems and resources stand to benefit from, or need to adapt to, AI?
    0 Kommentare 0 Anteile 6 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Meta Rebukes Indias WhatsApp Antitrust Ruling, Plans Legal Challenge to $25M Fine
    The social media giants acquisition of WhatsApp is facing growing antitrust scrutiny over data sharing between its other applications.
    0 Kommentare 0 Anteile 7 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Cloud Levels the Playing Field in the Energy Industry
    Matt Herpich, CEO, Conduit PowerNovember 18, 20243 Min ReadAleksia via Alamy StockWe operate as a lean technology startup in the traditionally conservative energy industry. We have to. Going up against $100 billion behemoths requires agility and operational efficiency so we can make smart, quick decisions in the moment and move at the speed of the market. Technology -- specifically digital transformation in the cloud -- has enabled this bold business model, allowing us to bridge the budget gap and compete against much larger competitors that have been in business for decades.But simply declaring youre going to operate in the cloud isnt likely to lead to success. What we set out to do hadnt been done before, but we were lucky enough to be working with two industry leaders that helped us make the right technology decisions during a relatively fast implementation cycle -- the impact of which proved valuable to operations, employee productivity, and morale, especially in a market as competitive as the energy sector.Pioneering Cloud SolutionOur core mission is to build power plants for companies that want to co-locate power generation near where they need it -- for data centers, new industry, and other places that have rapidly growing electricity needs. The ability to remotely operate modern control room systems is mission critical, allowing us to meet resilience, compliance and security requirements of our customers without having to deploy people on-site at every customer plant. Data fuels our remote management capabilities, providing operators fingertip access to all kinds of information about our customers on-site grids, including generation, usage and asset health data, which is fed to a central control center near Houston, Texas.Related:Building a vast wide-area network with high-performance fiber would cost tens of millions of dollars. Some of our well-funded competitors have done this, building massive IT infrastructures across customer sites at a scale that rivals the worlds biggest tech companies. We took a different path, working with Hitachi Energy and Amazon Web Services (AWS) to create a cloud-based network management solution. Moving to the cloud led to a six-month deployment timeline and cost a third of the budget required to build a similar on-premises deployment.Our cloud strategy allows our operators to monitor and control grid assets distributed across the state from a central location and provides fast response, redundancy, disaster recovery, and security services -- all the capabilities youd expect from one of the major players in our field. By working closely with our partners, we can do this without the big budget of our competitors nor hiring or training additional personnelRelated:Keeping Families Together During a DisasterMoving to the cloud provided immediate value. Only months after migrating to the cloud, Hurricane Beryl struck the Texas coastline and disrupted power throughout the state. Our customers needed their power plants up and running at optimal capacity to mitigate the outages.Normally, we would have had to send our operators hundreds of miles on site to oversee plant recoveries -- a costly and time-consuming prospect. However, our cloud-native strategy allowed our operators to simply log on from home where they could maintain operators from a web-based dashboard. Not only did we keep our customers up and running, but we also didnt have to disrupt our workers families during the federally declared disaster.The Cloud Delivers Operational FlexibilityOperating in the energy industry as a lean startup is much easier when you leverage the power of cloud technology to create operational efficiencies, provide stellar experiences to customers and make fast, data-informed decisions that put us one step ahead of larger competitors. Through the cloud, we are able to grow our IT capabilities in line with business growth objectives. While we currently operate plants that generate less than 100 megawatts (MW) of power, well be able to scale our SCADA and network management operations to meet the needs of any sized plant in the future. Well be able to meet this demand without having to over-provision resources in advance or invest millions of dollars in an on-premises data center. And that flexibility is worth its weight in gold.Related:About the AuthorMatt HerpichCEO, Conduit PowerMatt Herpich is CEO of Conduit Power. He previously served as head of finance and operations for Arcadia Powers Texas Energy Services business unit. He came to Arcadia through the acquisition of Real Simple Energy, a Texas-based retail power brokerage, of which he was co-founder. Matt earned a BS in Electrical Engineering from Yale and an MS in Information Technology (big data focus) from Carnegie Mellon.See more from Matt HerpichNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 7 Ansichten
  • WWW.INFORMATIONWEEK.COM
    6 Cloud Trends to Watch in 2025
    Lisa Morgan, Freelance WriterNovember 18, 20247 Min ReadYAY Media AS via Alamy StockBusiness competitiveness is driving organizations deeper into the cloud where they can take advantage of more services. Leading organizations are realizing economic benefits ranging from cost savings and deeper insights to successful innovations. Artificial intelligence is driving an increase in cloud usage.We anticipate a continued growth of a few significant cloud trends for 2025, with the rise of GenAI being a major driver, says John Samuel, global CIO and EVP at CGS (Computer Generated Solutions), a global IT and outsourcing provider. Cloud providers are heavily investing in GenAI technologies, collaborating with chip manufacturers to enhance performance and scalability. This partnership enables cloud platforms to power a growing ecosystem of downstream SaaS providers that are building solutions to allow easier adoption of AI-based solutions. As a result, GenAI is becoming a key enabler for adopting advanced AI capabilities across industries, with cloud acting as the backbone.Mike Stawchansky, chief technology officer at financial services software applications provider Finastra, warns that privacy concerns and contractual ambiguity around the rights to utilize customer data for GenAI will become more of an issue. Customers want the insights and efficiencies GenAI can deliver but may not be willing to grant more extensive access to their data.Related:Capacity issues are becoming more frequent as organizations grapple with the resource-heavy workloads that AI-powered technologies bring. Further, expansion into other cloud regions may hold businesses back as different regions present their own unique compliance and data residency challenges, says Stawchansky in an email interview. GenAI is going to continue to put pressure on businesses to be better, faster, and more efficient. Early adopters are seeing gains, so those who have not yet begun to experiment with the technology risk falling behind.Cloud security will also become more of an issue, however. Security teams will begin to harness AI assistance to automate response processes for cloud-based exposure and threat detection.The volume of exposures and threats, combined with varying experience levels in SecOps teams, means that effective remediation relies on the ability to guide team members with prescriptive remediation procedures using AI. This will see mainstream adoption in 25, says Or Shoshani, co-founder and chief executive officer at real-time cloud security company Stream.Security. Enterprises have done little to evolve their detection and response capabilities to meet the unique aspects of the cloud environment. They are relying on processes and technology designed for securing on-prem infrastructures and its insufficient. Its a combination of lack of awareness of the problem, in addition to inertia.Related:Following are some more cloud trends to watch in 2025:1. Multi- and hybrid clouds will become more commonCloud providers recognize that customers prefer to leverage multiple cloud platforms for flexibility, risk mitigation, and performance optimization. In response, they are enabling inter-cloud operability, which enables users to perform analytics and utilize data across cloud providers without moving their data, according to CGS Samuel.Enterprises [and] small- to medium-sized businesses appear well-prepared for upcoming cloud trends like GenAI adoption and multi-cloud strategies. Cloud providers are responding by enabling technologies that reduce on-premises infrastructure needs, making it easier for companies to offload workloads to the cloud, Samuel says.Faiz Khan, founder & CEO at multi-cloud SaaS and managed service provider Wanclouds, says the major public cloud providers eliminated data transfer fees over the last year, making it easier to migrate data from one public cloud provider to another.Related:"By adopting a multi-cloud approach, you can train your distributed AI workloads and models across multiple environments. For instance, there could be a benefit to using Azure's computing power to train one AI model and AWS for another. Or you could keep your legacy cloud workloads on one public cloud and then your AI workloads on a separate public cloud, says Khan in an email interview. This approach enables enterprises to tailor their cloud environment to the needs of each AI application. It's also become a lot cheaper to migrate these applications across public clouds if the environment or needs change.However, time and cost can slow adoption. Businesses need sufficient time to research and implement new cloud solutions, and the confidence that the shift will deliver the cost optimization they expect. Balancing immediate costs with long-term cloud benefits is an important consideration.2. CISOs will need better cloud monitoringSOC and the SecOps teams will need to integrate cloud context into their day-to-day detection and response operations in 2025 to effectively detect and respond to exposures and threats in real time.Most SecOps teams are still relying on alert-based tools designed for on prem environments that are missing information related to exposure and attack path across all elements of the cloud infrastructure, saysStream.SecuritysShoshani. This results in an inability to identify real threats and massive amounts of time [to investigate] false positives.3. Cloud spending will increaseWanclouds Khan says most organizations will increase their cloud spending substantially in 2025.Like other aspects of IT, AI will be the force behind most of the trends occurring in the cloud in 2025. AI is going to drive a big spending boom in the cloud next year. Organizations need to increase the amount of cloud resources they have to be able to handle the compute GenAI model training requires, says Khan. Furthermore, we're also seeing IT teams now spending on new AI tools and features that can be utilized to improve and automate cloud management."4. Landing zones will gain more tractionLanding zones provide a standardized framework for cloud adoption. They are becoming more prominent as they address scalability and security concerns.Cloud providers are putting together templates for various industry verticals, such as finance and healthcare, that will allow customers to build solutions for regulatory environments much faster, saysFinastrasStawchansky. Most enterprises will be some way along their cloud-adoption and migration roadmaps today. Its just a question of how well-equipped they are for scaling their capabilities, especially as they seek to operationalize resource-heavy technologies, such as LLMs and GenAI. Having structured ways to approach scaling resources, while efficiently harnessing this technology will be crucial for ensuring ROI.5. Cybersecurity resilience will use digital twins for ransomware war gamesCyber recovery rehearsals will reach a new level of sophistication as organizations aim for ever faster recovery times in todays hybrid and multi-cloud environments.Cyber criminals are now using AI to increase the frequency, speed and scale of their attacks. In response, organizations will also use AI -- but this time, to fight back, says Matt Waxman, SVP and GM of data protection at secure multi-cloud data management company Veritas Technologies. As we know, the key to success is all in the preparation, so much of this work is going to be done in advance, using AI to predict the best response when ransomware inevitably hits.Organizations will play out ransomware wargames using cloud-based digital twins in AI-powered simulations of every possible attack scenario across entire infrastructures -- from edge to core to cloud.Plans are one thing, but an organization cant claim resilience without proving that those plans have been pressure tested. More than a nice-to-have, these advanced rehearsals will soon become mandated by regulation, says Waxman.6. Cyberspace will extend to outer spaceSatellite connectivity is growing, though Waxman says space-based computing may get a nudge in 2025.As humans return to the moon for the first time in more than 50 years aboard NASAs Artemis II, technology visionaries will be re-inspired to explore the possibilities of space-based computing, says Waxman. Datacenters in space present many benefits. For example, the unique environmental conditions mean that much less energy is required to spin disks or cool racks. However, there are also obvious challenges, such as transmission latency, which makes storage in space more effective for data that only needs accessed occasionally, like backup data.Spurred by the promise of datacenters freed from atmospheric constraints, in 2025, visionaries will begin to set their minds to overcoming the barriers to computing in space, he says.About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 8 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Generative AI: Reshaping the Semiconductor Value Chain
    Marco Addino, Managing Director, AccentureNovember 15, 20244 Min ReadPanther Media GmbH via Alamy StockWithout doubt, todays society relies on the semiconductor industry. After all, can you imagine a world without smartphones, cars, power stations, and televisions? We, as people, and the global economy more broadly, rely on continued innovation from the chips the industry produces. But there are challenges facing these companies across the board -- design, manufacturing and demand. Talent is in increasingly short supply, and on top of that geopolitical tensions and onshore manufacturing add another layer of complexity. The industry keeps having hurdles to cross, one after another it seems. Only recently, another problem for the industry made the headlines when Hurricane Helene hit Spruce Pine, one of the worlds most important locations for semiconductor, raising questions about the impact it would have.It's already tough enough for semiconductor companies to deal with and resolve these issues, but they are appearing while generative AI has made the need for innovation a must do now, not a must do at some point. The question is whether the semiconductor industry can reinvent itself quickly enough for this new generative AI moment. Accenture analysis found reinventors (those companies that have already built the capability for continuous reinvention) increased revenues by 15 percentage points over other companies between 2019 and 2022. We expect that gap in revenue growth between reinventors and the rest to increase by 2.4 times to 37 percentage points by 2026, so theres a clear opportunity for them. Yet our survey of global semiconductor executives found that 71% believe it will take at least three years for the semiconductor industry to deploy generative AI at scale. The industry could do with that timeframe accelerating somewhat.Related:The Challenges AheadIts not going to be easy, however, of course it wont. But semiconductor companies need to use generative AI across the entire spectrum -- spanning design and manufacturing, through sales and marketing, to customer service --to seize opportunities for innovation in both the short- and long-term. Adapting that broad view across the value chain is a must to reinvent the value chain, however daunting that may initially seem.There are other concerns too, such as IP. In fact, 73% of executives cite IP concerns as the biggest barrier to generative AI deployments. Then theres of course the cost issue and the need to balance technical debt with investments for the future, both of which, are necessary.Once leaders grapple with how those challenges can be overcome, theres another pressing challenge and thats having the right talent in place to deploy these applications successfully.Related:Most semiconductor companies are already fully aware of that and are doing everything they can to accelerate gaining new talent and reskilling their existing workforce. However, the speed with which generative AI is changing the way businesses work means they must also get support from across their ecosystem to ensure they have all critical skills in place.It's Time for Leaders to Place Their BetsThe industry needs to move forward with two workstreams running in parallel. First, CEOs and other business leaders must make no regrets moves; those use cases with the lowest risk, shortest time to show results and therefore, value. For example, Generative AI-enabled field service assistants would allow field service engineers to perform root cause analyses faster and recommend repair methods based on machine data, therefore reducing downtime and accelerating production. It also provides immediate access to information that helps technicians increase their knowledge, therefore helping with the skills gap. Generative AI can also be used in other areas, such as sales and marketing where it can improve the quality and level of personalization of the content to drive more personalized campaigns.Related:At the same time, strategic bets need to be decided upon to support the long-term goals of the business. An example of this is in process engineering. Generative AI-enabled applications that incorporate historical process parameter data to create more efficient designs for semiconductor equipment and wafer development. These tools can use drawings, text, images and more to create customized outputs that engineers can use to augment experiments, allowing for a more objective approach to experimental design. These strategic bets will be the things that will offer the highest value. They may well take some time to roll out, but they could pave the way for total reinvention and therefore, competitive advantage.Whether the no regret moves or strategic bets, the guiding principle is choosing the right use cases, at the right point and at the right time. Every semiconductor companys generative AI journey is different, but the approaches will be similar. All companies must establish a solid data foundation, have the necessary skills in place, and importantly, have the right ecosystem in position. Those that come out on top wont just be the best player, but the businesses that put the right connections in place.About the AuthorMarco AddinoManaging Director, AccentureMarco Addino is a managing director in Accentures high tech industry practice leading the companys semiconductor business in EMEA, and is the client account lead for Italy, Central Europe and Greece, responsible for building and growing strategic relationships in the region. He is experienced in high complexity products engineering, supply chain and operations, large scale digital and technology transformations, organizational design, post-merger integration, and the design and implementation of platform business models.See more from Marco AddinoNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 8 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Edge Extending the Reach of the Data Center
    Companies are keeping their central data centers, but theyre also moving more IT to the enterprise edge. The result is a re-imagined concept of data center that includes the data center but also subsumes cloud and other edge-computing operations.In this expanded data center model, ITs role hasnt fundamentally changed. It must still implement, monitor and maintain data center operations, no matter where they occur.But since IT staff cant be at all remote locations at once, software and hardware technologies are being called upon to do the job of facilitating end-to-end data center management, no matter where that management is.Technologies to Facilitate Remote Data Center ManagementTo assist IT in managing the expanded data center, tools and technology solutions must do two key things: monitor and manage IT operations, functions and events; and automate IT operations.Here are five technologies that help:System on a chip (SOC). First conceived in the 1970s, system on a chip embeds processing, memory and, today, even security and artificial intelligence on a single chip. The chip powers a device or network endpoint.SOC can appear in a router, sensor, smartphone, wearable, or any other Internet of Things (IoT) device. The original selling point of SOCs was their ability to offload processing from the central data center and reduce latency when processing can be done locally.Related:Now, these SOC routers, devices, and access points come with embedded security that is WPA2/3 compliant and can encrypt data and block DNS attacks or suspicious websites. That security is complemented with AI that aids in threat detection and in some cases, threat mitigation, such as being able to automatically shut down and isolate a detected threat.To use SOC threat detection and mitigation at the edge, IT must:Ensure that the security ruleset on edge devices is in concordance with corporate-wide data center security policies; andEmploy an overarching network monitoring solution that can integrate the SOC-based security with central data center security and monitoring so every security action can be observed, analyzed, and mitigated from a single pane of glass in the central data center.Zero-trust networks. Zero-trust networks trust no one with unlimited access to all network segments, systems, and applications. In the zero-trust scheme, employees only gain access to the IT resources they are authorized for.Users, applications, devices, endpoints, and the network itself can be managed from a central point. Internal network boundaries can be set to allow only certain subsets of users access. An example is a central data center in Pittsburgh with a remote manufacturing plant in Phoenix. A micro network can be defined for the Phoenix plant that can only be used by the employees in Phoenix. Meanwhile, central IT has full network management, monitoring, and maintenance capability without having to leave the central data center in Pittsburgh.Related:Automated operations. Data and system backups can be automated for servers deployed at remote points, whether these backups are ultimately rerouted to the central data center or a cloud service. Other IT functions that can be automated with guidance from an IT ruleset include IT resource provisioning and de-provisioning, resource optimization, and security updates that are automatically pushed out for multiple devices.Its also possible to use remote access software that allows IT to gain control of a users remote workstation to fix a software issue.Edge data centers.Savings in communications can be achieved, and low-latency transactions can be realized if mini-data centers containing servers, storage and other edge equipment are located proximate to where users work. Industrial manufacturing is a prime example. In this case, a single server can run entire assembly lines and robotics without the need to tap into the central data center. Data that is relevant to the central data center can be sent later in a batch transaction at the end of a shift.Related:Organizations are also choosing to co-locate IT in the cloud. This can reduce the cost of on-site hardware and software, although it does increase the cost of processing transactions and may introduce some latency into the transactions being processed.In both cases, there are overarching network management tools that enable IT to see, monitor and maintain network assets, data, and applications no matter where they are. The catch is that there are still many sites that manage their IT with a hodgepodge of different types of management software.A single pane of glass. At some point, those IT departments with multiple network monitoring software packages will have to invest in a single, umbrella management system for their end-to-end IT. This will be necessary because the expanding data center is not only central, but that could be in places like Albuquerque, Paris, Singapore and Miami, too.ITs end goal should be to create a unified network architecture that can observe everything from a central point, facilities automation, and uses a standard set of tools that everybody learns.Are We There Yet?Most IT departments are not at a point where they have all of their IT under a central management system, with the ability to see, tune, monitor and/or mitigate any event or activity anywhere. However, we are at a point where most CIOs recognize the necessity of funding and building a roadmap to this uber management network concept.The rise of remote work and the challenge of managing geographically dispersed networks have driven the demand for network management system (NMS) solutions with robust remote capabilities, reports Global Market Insights, adding, As enterprises increasingly seek remote network management, the industry is poised for substantial growth.
    0 Kommentare 0 Anteile 9 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Building an Augmented-Connected Workforce
    John Edwards, Technology Journalist & AuthorNovember 15, 20245 Min ReadSasin Paraksa via Alamy Stock PhotoIn their never-ending quest to improve efficiency and productivity, a rapidly growing number of enterprises are currently building, or planning to build, augmented-connected workforces. An augmented-connected workforce allows humans and machines to work together in close partnership. The goal is people and devices functioning more productively and efficiently than when working in isolation.An augmented-connected workforce can be defined as a tech-enabled workforce of humans that have access to next-generation technologies, such as AI, IoT, and smart devices, to do their day-to-day jobs, says Tim Gaus, a principal and smart manufacturing business leader with Deloitte Consulting, in an online interview. "These technologies add a level of intelligence and efficiency for employees by providing skills that humans dont possess while allowing workers to focus on higher level, strategic work." In general, augmented-connected workforces allow for a more dynamic, connected work environment that prepares human team members to work seamlessly with high technology devices.Building the CaseToday's workforce is moving rapidly toward an integrated, interconnected ecosystem of workers and technology. "By evolving our mindset on what a workforce is, it becomes clear that an augmented-connected workforce provides the most potential," Gaus says.Related:An augmented-connected workforce's benefits vary significantly depending on the type of augmentation being applied, says Melissa Korzun, vice president of customer experience operations at technology services firm Kantata. On the whole, however, it can reduce errors, decrease costs, improve quality, and even contribute to safer working conditions in manufacturing sectors, she notes in an email interview.Other potential benefits include faster training and upskilling, improved safety, enhanced efficiency, and better cost management. "In manufacturing, for example, as businesses look to expand production capabilities, using innovative tools designed for workers can help streamline processes, leading to faster time-to-market," Gaus explains.Korzun notes that in the business sector an augmented-connected workforce promises to build significant administrative efficiency. It can, for example, reduce the time needed to process large volumes of information while creating the ability to summarize unstructured data sets. Companies that take advantage of these new assistive capabilities will benefit from improved productivity, increased quality, and less burnout in their workforce, she says.Related:As organizations continue to scale their augmented-connected workforces, additional benefits are likely to emerge. "Life sciences, for example, has seen a huge benefit in leveraging computers to expedite data analysis and then pairing humans to use these discoveries to create new therapies for diseases," Gaus says. He expects that many other discoveries will emerge across industries over time, leading to innovations as well as new opportunities to engage customers.Virtual AssistanceAn augmented workforce can work faster and more efficiently thanks to seamless access to real-time diagnostics and analytics, as well as live remote assistance, observes Peter Zornio, CTO at Emerson, an automation technology vendor serving critical industries. "An augmented-connected workforce institutionalizes best practices across the enterprise and sustains the value it delivers to operational and business performance regardless of workforce size or travel restrictions," he says in an email interview.An augmented-connected workforce can also help fill some of the gaps many manufacturers currently face, Gaus says. "There are many jobs unfilled because workers aren't attracted to manufacturing, or lack the technological skills needed to fill them," he explains.Related:Building a PlanTo keep pace with competitors, businesses should develop a comprehensive strategy for utilizing new technologies, including establishing a cross-functional team that's dedicated to identifying critical areas where technology augmentation can help solve core business challenges, Korzun says. "There are lots of shiny objects out there to chase right now -- focus on applying new tech capabilities to your most critical business issues." To assist with planning, she advises IT leaders to talk with their vendors about their current augmented-connected workforce technologies and their roadmaps for the future.For enterprises that have already invested in advanced digital technologies, the path leading to an augmented-connected workforce is already underway. The next step is ensuring a holistic approach when looking at tangible ways to achieve such a workforce. "Look at the tools your organization is already using -- AI, AR, VR, and so on -- and think about how you can scale them or connect them with your human talent," Gaus says. Yet advanced technologies alone aren't enough to guarantee long-term success. "Innovative tools are the starting point, but finding ways to make human operations more efficient will lead to true impact."Final ThoughtsWhile many enterprises have already begun integrating emerging technologies into routine tasks, innovation alone without considering the role humans will play within the new model can lead to slower progress in an augmented-connected model, Gaus warns. "Humans are much more likely to engage with and utilize technology they understand and trust." The other piece of the puzzle is ensuring that workers are appropriately skilled in the new technologies entering the business.Businesses must continue to embrace technology and digital transformation in order to build the most dynamic workforce possible, Gaus states. "Doing so will maximize their technology investment and create a more connected, reliable workforce."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 9 Ansichten
  • WWW.INFORMATIONWEEK.COM
    TSMC Secures $6.6B as Biden Administration Races to Dole Out CHIPS Act Funds
    With uncertainty about how a new Trump Administration will handle the $52.7 billion program, the outgoing administration is under pressure to make good on one of its signature legislative wins.
    0 Kommentare 0 Anteile 9 Ansichten
  • WWW.INFORMATIONWEEK.COM
    What Could the Trump Administration Mean for Cybersecurity?
    The results of the 2024 US presidential election kicked off a flurry of speculation about what changes a second Donald Trump administration will bring in terms of policy, including cybersecurity.InformationWeek spoke to three experts in the cybersecurity space about potential shifts and how security leaders can prepare while the industry awaits change.Changes to CISAIn 2020, Trump fired Cybersecurity and Infrastructure Security Agency (CISA) Director Christopher Krebs after he attested to the security of the election, despite Trumps unsupported claims to the contrary. It seems that the federal agency could face a significant shakeup under a second Trump administration.The Republican party believes that agency has had a lot of scope creep, AJ Nash, founder and CEO of cybersecurity consultancy Unspoken Security, says.For example, Project 2025, a policy playbook published by conservative think tank The Heritage Foundation, calls to end CISAs counter-mis/disinformation efforts. It also calls for limits to CISAs involvement in election security. The project proposes moving the CISA to the Department of Transportation.Trump distanced himself from Project 2025 during his campaign, but there is overlap between the playbook and the president-elects plans, the New York Times reports.Related:I think it safe to say that CISA is going to have a lot of changes, if it exists at all, which I think [is] challenging because they have been very responsible for both election security and a lot of efforts to curb mis-, dis- and malinformation, says Nash.AI Executive OrderIn 2023, President Biden signed an executive order regarding AI and major issues that arose in the wake of its boom: safety, security, privacy, and consumer protection. Trump plans to repeal that order.We will repeal Joe Bidens dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing, according to a 2024 GOP Platform document.Less federal oversight on the development of AI could lead to more innovation, but there are questions about what a lack of required guardrails could mean. AI, how it is developed and used, has plenty of ramifications to cybersecurity and beyond.The tendency of generative AI to hallucinate or confabulate that's the concern, which is why we have guardrails, points out Claudia Rast, chair of the intellectual property, cybersecurity, and emerging technology practice at law firm Butzel Long.Related:While the federal government may step back from AI regulation, that doesnt mean states will do the same. You're going to see California [and] Texas and other states taking a very proactive role, says Jeff Le, vice president of global government affairs and public policy at cybersecurity ratings company SecurityScorecard.California Governor Gavin Newsom signed several bills relating to the regulation of GenAI. A bill -- the Texas Responsible AI Governance Act (TRAIGA) -- was introduced in the Lone Star State earlier this year.Cybersecurity RegulationThe Trump administration is likely to roll back more cybersecurity regulation than it will introduce. I fully anticipate there to be a significant slowdown or rollback on language or mandated reporting, incident reporting as a whole, says Le.Furthermore, billionaire Elon Musk and entrepreneur Vivek Ramaswamy will lead the new Department of Government Efficiency, which will look to cut back on regulation and restructure federal agencies, Reuters reports.But enterprise leaders will still have plenty of regulatory issues to grapple with. They'll be looking at the European Union. They'll be looking at regulations coming out of Japan and Australia they'll also be looking at US states, says Le.That's going to be more of a question of how they're going to navigate this new patchwork.Related:Cyber Threat ActorsNation state cyber actors continue to be a pressing threat, and the Trump administration appears to be planning to focus on malicious activity coming out of China, Iran, North Korea, and Russia.I do anticipate the US taking a more aggressive stance, and I think that's been highlighted by the incoming national security advisor Mike Waltz, says Le. I think he has made a point to prioritize a more offensive role, and that's with or without partners.Waltz (R-Fla.) has been vocal about combatting threats from China in particular.Preparing for ChangePredicting a political future, even just a few short months away, is difficult. With big changes to cybersecurity ahead, what can leaders do to prepare?While uncertainty prevails, enterprise leaders have prior cybersecurity guidelines at their fingertips today. It's time to deploy and implement the best practices that we all know are there and [that] people have been advising and counseling for years at this point, says Rast.
    0 Kommentare 0 Anteile 8 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Where IT Consultancies Expect to Focus in 2025
    In the past few years, artificial intelligence has dominated New Years predictions. While the same can be said about 2025, scalability, responsibility, and safety will be stronger themes.For example, global business and technology consulting firm West Monroe Partners sees data and data governance being major focus areas.Its no longer just about quick wins or isolated use cases. The focus is shifting towards building robust data platforms that can support long-term business goals as they move forward, says Cory Chaplin, technology and experience practice leader at West Monroe. A key part of this evolution is ensuring that organizations have the right data foundation in place which in turn allows them to harness the full potential of advanced uses like analytics and AI.Efforts Will Focus on Responsible and Safe UseGenAI has caught the attention of boards and CEOs, but its success hinges on having clean, accessible data.Much of whats driving conversations around AI today is not just the technology itself, but the need for businesses to rethink how they use data to unlock new opportunities, says Chaplin. AI is part of this equation, but data remains the foundation that everything else builds upon.West Monroe also sees a shift toward platform-enabled environments where software, data, and platforms converge.Rather than creating everything from scratch, companies are focusing on selecting, configuring, and integrating the right platforms to drive value. The key challenge now is helping clients leverage the platforms they already have and making sure they can get the most out of them, says Chaplin. As a result, IT teams need to develop cross-functional skills that blend software development, platform integration and data management. This convergence of skills is where we see impact -- helping clients navigate the complexities of platform integration and optimization in a fast-evolving landscape.Right now, organizations face significant challenges keeping pace with rapid technological advancements, especially with AI evolving so quickly. While many organizations have built substantial product and data teams, their ability to adapt and innovate at business speed often falls short.Cory Chaplin, West MonroeIts not just about having the right headcount. Its about the capacity to move quickly and embrace new technologies, says Chaplin. Even with skilled talent, internal teams can get bogged down by established processes and pre-existing organizational structures. The demand for specialized expertise in AI and data-driven fields continues to outpace supply, complicating their transformation journeys. This is where we provide the support needed to challenge existing paradigms and accelerate their progress.Over the last few years, there has been a gap between expectations and progress. Despite the hype surrounding AI, data, and new technologies, many organizations have struggled to realize the full value of their investments, irrespective of industry.Organizations are tired of chasing buzzwords, says Chaplin. They want AI to be a productive part of their operations, working behind the scenes to enhance existing platforms, support their teams and drive growth. They [also] want help embedding AI into their current operations, ensuring that its not just another shiny tool, but a core driver of growth and efficiency within existing business operations.AI Plus ModernizationThe demand for AI/ML and GenAI is growing across industries, particularly in areas like automation, predictive analytics, and personalized customer experiences. Data and analytics remain crucial as businesses aim to harness their data to make smarter, faster decisions. Cloud and application modernization are also essential as many organizations want to update legacy systems, improve agility, and adopt cloud-native technologies.Many clients need help with scalability, technology integration and data modernization. They may need help with outdated systems, underutilized data or the complexities of adopting new technologies, particularly in highly regulated industries like life sciences and energy, says Stephen Senterfit, president of enterprise business consultancy Smartbridge, Additionally, the rapid pace of innovation can make it hard for businesses to know where to focus their resources.With this help, enterprises should see improved operational efficiencies, better data-driven decision-making and more robust customer engagement. They will also be able to scale rapidly, remain competitive in their respective industries and innovate in ways that were previously out of reach.Smartbridge's relationship with clients is evolving from technology service provider to strategic partner, says Senterfit. Clients expect us to help them navigate broader digital strategies, advise them on tech implementation and innovation roadmaps, and future-proof their business models.AI-Related Change Management and UpskillingAs AI continues to become increasingly mainstream, theres a growing demand for organizational design, change management, and upskilling services designed to get more out of new ways of working and managing organizational shifts.Clients are increasingly asking, How do we build AI into our business? says West Monroes Chaplin. This isnt just about implementing new technologies, its about preparing the workforce and the organization to operate in a world where AI plays a significant role. Theres momentum building around this intersection of organizational design, change management, and upskilling -- helping companies function effectively in an AI-driven environment.CybersecurityAs businesses adopt AI, use more data, and deploy new emerging technologies, cybersecurity becomes even more critical.With increased platform adoption, securing confidential information is paramount. We see a renewed emphasis on secure software development practices and tighter controls on AI/ML model usage, ensuring protection as organizations scale their AI initiatives, says Chaplin. By understanding and utilizing their data effectively, organizations can foster a culture where data-driven insights inform decision-making processes. This not only enhances operational efficiency but also drives innovation across the business.Organizations should consider data a critical asset that requires attention and strategic use. This requires a mindset shift that can lead to improved outcomes across various functions, particularly in cybersecurity, where organizations can de-risk their operations even amid rapid changes.With platform-enabled environments, organizations can reduce their reliance on fully custom solutions. By leveraging existing platforms and their roadmaps, companies can enhance their agility and speed of implementation, says Chaplin. This approach allows for a greater emphasis on building from proven solutions rather than creating from scratch, ultimately facilitating quicker adaptations to market demands.Data and Customer FocusAs companies increasingly focus on digital transformation, data-driven decision-making and improving customer engagement, they look to consultancies for help.Our data engineering practice will play a central role in helping businesses migrate from legacy systems to the cloud, a significant challenge for many organizations as they modernize their analytical workloads, says Alex Mazanov, CEO at full service consulting firm T1A. By 2025, we anticipate an even greater demand for scalable, cloud-based data architectures capable of handling vast amounts of real-time data. Many organizations are moving away from outdated legacy systems, such as SAS, to modern cloud platforms like Databricks.Continued data explosion, combined with AI advances, is pushing companies to modernize their data infrastructure.Businesses are increasingly challenged to make faster, smarter decisions, and well provide the tools and expertise to architect solutions that scale with their needs, ensuring data is a true asset rather than a burden, says Mazanov. Additionally, transitioning to open-source platforms and government-compliant technologies will help businesses stay agile, cost-efficient and aligned with regulatory demands.AI is also becoming more prevalent in CRM scenarios because it increases productivity, reduces costs, and helps maximize customer lifetime value. Specifically, they want to enhance loyalty programs, improve customer retention and use data analytics to predict behavior across the entire customer lifecycle.Optimizing the customer journey will continue to be crucial in 2025, as businesses will increasingly focus on maximizing customer lifetime value [using] advanced tools and strategies to improve every touchpoint in the customer journey, says Mazanov. Many companies struggle to optimize this.Finally, process intelligence will be even more critical by 2025, as companies continue to streamline operations, reduce inefficiencies, and cut costs in an increasingly competitive market. AI and machine learning will be used to automate and optimize business processes. As industries move toward hyper-automation, Mazanov says clients will need to become more agile and efficient.Organizations are constantly seeking ways to reduce operational costs while improving efficiency, he says. By 2025, companies will face rising expectations to do more with less, and process intelligence will be a vital tool to achieve this. Our solutions will focus on creating smarter, more efficient workflows, powered by AI to reduce manual tasks and human error.Stephen Senterfit, SmartbridgeMany organizations are experiencing the dichotomy of being challenged by the complexity of their data and needing real-time insights. Meanwhile, customer expectations continue to grow.[Our relationship with clients [is] evolving from being a service provider to a strategic partner. By 2025, we anticipate playing a more consultative role, helping clients not just implement technology but also reimagine their business models around data and AI, says Mazanov. Well be focused on long-term partnerships, co-creating innovative solutions that align with their broader business strategy.Get Help When You Need ItCompanies have many different reasons for seeking outside assistance. Sometimes the engagement is tactical and sometimes its strategic. The latter is becoming more common because it drives more value.One of the least valuable engagements is hiring a consultancy to solve a problem without internal involvement. When the consultants conclude their arrangement, considerable valuable knowledge may be lost. Working as a partner results in greater transparency and continuity.One benefit of using consultants, not mentioned above but critically important, is insight clients may lack, such as having a deep understanding of how emerging technology is utilized in the clients particular industry and whats worked best for other industries and why, which can result in important insights and innovations. They also need to understand the clients business goals so that IT implementations deliver business value.
    0 Kommentare 0 Anteile 7 Ansichten
  • WWW.INFORMATIONWEEK.COM
    From Declarative to Iterative: How Software Development is Evolving
    Lisa Morgan, Freelance WriterNovember 12, 20246 Min ReadDragos Condrea via Alamy StockSoftware development is an ever-changing landscape. Over the years, it has become easier to generate high-quality code faster, though the definition of faster is a moving target.Take low-code tools, for example. With them, developers can build most of the functionality they need with the platform, so they only need to write the custom code the application requires. Low-code tools have also democratized software development -- particularly with the addition of AI.GenAI is accelerating development even further, and its changing the way developers think about code.Siddharth Parakh, senior engineering manager at Medable, expects Ai to revolutionize productivity.The ability for AI to automate repetitive tasks, refactor code and even generate solutions from scratch would allow developers to focus on higher-order problem-solving and strategic design decisions, says Parakh in an email interview. With AI handling routine coding, developers could become orchestrators of complex systems rather than line-by-line authors of software.But theres a catch: Currently, AI-generated code cannot fully replace human intuition in areas such as creative problem solving, contextual understanding, and domain-specific decision-making. Also, AI models are only as good as the data they are trained on, which can lead to bias issues, error propagation or unsafe coding practices, he says. Quality control, debugging, and nuanced decision-making are still areas where human expertise is necessary.Related:How AI HelpsThe operative work is automation.If AI takes over the majority of coding tasks, it would drive unprecedented efficiency and speed in software development, says Medables Parakh. Teams could iterate faster, adapt to changes more fluidly and scale projects without the traditional bottlenecks of manual coding. This could democratize software development, enabling non-experts to create functional software with minimal input.Geoffrey Bourne, co-founder of social media API company Ayrshare, says GenAI coding assistants are now an integral part of his coding.They produce lines of code which save me hours on a weekly basis. But, although the results are improving, theyre correct less than 40% of the time. You need the experience to know the code just isnt up to scratch and needs adjusting or a redo, says Bourne in an email interview. Newbie coders are starting out with these assistants at their fingertips but without the years of experience writing code their seniors have. Weve got to take this into account and not necessarily limit their access but find creative ways to inject that knowledge. You need to find a balance [between] the instant code fix with healthy experience and a critical eye.Related:The evolution of programming, especially through abstraction layers and GenAI, has significantly transformed the way Surabhi Bhargava, a machine learning tech leadat Adobe, approaches her work.GenAI has made certain aspects of development much faster. Writing boilerplate code, prototyping and even debugging is now more streamlined. Finding information across different documents is easier with AI and copilots, says Bhargava in an email interview. [Though] AI can speed things up, I now [must] critically assess AI-generated outputs. It has made me more analytical in reviewing the work produced by these systems, ensuring it aligns with my expectations and needs, particularly when handling complex algorithms or compliance-driven work.AI tools are also helping her create rapid prototypes and theyre reducing the cognitive load.I can focus more on strategic thinking, which improves productivity and gives me room to innovate, says Bhargava. Sometimes, its tempting to lean too heavily on AI for code generation or decision-making. AI-generated solutions arent always optimized or tailored for the specific needs of a project, resulting in bugs and issues in prod. [And] sometimes, it takes more time to set it up if the tools are complex to use.Related:Hands-Free Coding Still Hasnt ArrivedAt present, AI struggles with its own set of issues such as misinterpretation, hallucination and incorrect facts. Over-reliance on AI-generated code could lead to a lack of deep technical expertise in development teams.With humans less involved in the nitty-gritty of coding, we could see a decline in the essential skills needed to debug, optimize, or creatively problem-solve at a low level. Additionally, ethical and security concerns could arise as AI systems might unknowingly introduce vulnerabilities or generate biased solutions, says Parakh. Tom Taulli, author of AI-Assisted Programming: Better Planning, Coding, Testing, and Deployment has been using AI-assisted programming tools for the past couple years. This technology has had the most transformative impact by far on his work in his over 40-year work history.Whats interesting is that I approach a project in terms of natural language prompts, not coding or doing endless searchers on Google and StackOverflow. In fact, I set up a product requirements document that is a list of prompts. Then, I go through each one for the development of an application, says Taulli. These systems are far from perfect. But it only takes a few seconds to generate the code -- and this means I have more time to review it and make iterations.Taulli has been a backend developer primarily, but AI assisted programming has allowed him to do more front-end development.The funny thing is that one of the biggest drawbacks is the pace of innovation with these tools. It can be tough to keep up with the many developments, says Taulli. True, there are other well-known disadvantages, such as with security and intellectual property. Is the code being copied? Do you really own the code you create? says Taulli. However, I think one of the biggest drawbacks is the context window. Basically, the LLMs cannot understand the large codebases. This can make it difficult for sophisticated code refactoring..Another issue is the cut-off date of the LLMs. They may not have the latest packages and frameworks, but the benefits outweigh the drawbacks, he says.Tom Jauncey, head nerd at digital marketing agency Nautilus Marketing, says GenAI tools like GitHub Copilot have accelerated the coding process by letting him think about high-level architecture and design. His advice is to use AI to save time on boilerplate code and documentation.Some of the things that I had to learn were how to prompt AI tools and think critically about their output. It is important to remember that while AI is great at generative code, it doesn't always understand broader context and business requirements, says Jauncey. Thus, always cross-check the AI-generated code with official documentation. AI-powered tools ease the effort of exploring a new language or framework without having to go into syntax details.Edward Tian, CEO of GPTZero, believes its better to use GenAI to assist coding rather than relying on it entirely.Personalization is such a key aspect of coding, and GenAI sometimes just cant quite personalize things in the way you want. It can certainly create complicated code, but it just often falls short in terms of uniqueness, says Tian.Bottom LineGenAI is accelerating development by generating code quickly but beware of its limitations. While its good for writing boilerplate code and documentation, creating quick prototypes and debugging, its important to verify the outputs. Prompt engineering skills also help boost productivity.About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 8 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Unicorn AI Firm Writer Raises $200M, Plans to Challenge OpenAI, Anthropic
    The company -- taking direct aim at OpenAI, Anthropic, and other incumbents in the GenAI arms race -- plans to use the funding to fuel its agentic AI efforts.
    0 Kommentare 0 Anteile 7 Ansichten
  • WWW.INFORMATIONWEEK.COM
    How IT Can Show Business Value From GenAI Investments
    Nishad Acharya, Head of Talent Network, TuringNovember 11, 20244 Min ReadNicoElNino via Alamy StockAs IT leaders, were facing increasing pressure to prove that our generative AI investments translate into measurable and meaningful business outcomes. It's not enough to adopt the latest cutting-edge technology; we have a responsibility to show that AI delivers tangible results that directly support our business objectives.To truly maximize ROI from GenAI, IT leaders need to take a strategic approach -- one that seamlessly integrates AI into business operations, aligns with organizational goals, and generates quantifiable outcomes. Lets explore advanced strategies for overcoming GenAI implementation challenges, integrating AI with existing systems, and measuring ROI effectively.Key Challenges in Implementing GenAIIntegrating GenAI into enterprise systems isnt always straightforward. There are several hurdles IT leaders face, especially surrounding data and system complexity. Data governance and infrastructure. AI is only as good as the data its trained on. Strong data governance enforces better accuracy and compliance, especially when AI models are trained on vast, unstructured data sets. Building AI-friendly infrastructure that can handle both the scale and complexity of AI data pipelines is another challenge, as these systems must be resilient and adaptable.Related:Model accuracy and hallucinations. GenAI models can produce non-deterministic results, sometimes generating content that is inaccurate or entirely fabricated. Unlike traditional software with clear input-output relationships that can be unit-tested, GenAI models require a different approach to validation. This issue introduces risks that must be carefully managed through model testing, fine-tuning, and human-in-the-loop feedback.Security, privacy, and legal concerns. The widespread use of publicly and privately sourced data in training GenAI models raises critical security and legal questions. Enterprises must navigate evolving legal landscapes. Data privacy and security concerns must also be addressed to avoid potential breaches or legal issues, especially when dealing with heavily regulated industries like finance or healthcare.Strategies for Measuring and Maximizing AI ROIAdopting a comprehensive, metrics-driven approach to AI implementation is necessary for assessing your investments business impact. To ensure GenAI delivers meaningful business results, here are some effective strategies:Define high-impact use cases and objectives: Start with clear, measurable objectives that align with core business priorities. Whether its improving operational efficiency or streamlining customer support, identifying use cases with direct business relevance ensures AI projects are focused and impactful.Quantify both tangible and intangible benefits: Beyond immediate cost savings, GenAI drives value through intangible benefits like improved decision-making or customer satisfaction. Quantifying these benefits gives a fuller picture of the overall ROI.Focus on getting the use case right, before optimizing costs: LLMs are still evolving. It is recommended that you first use the best model (likely most expensive), prove that the LLM can achieve the end goal, and then identify ways to reduce cost to serve that use case. This will make sure that the business need is not left unmet.Run pilot programs before full rollout: Test AI in controlled environments first to validate use cases and refine your ROI model. Pilot programs allow organizations to learn, iterate, and de-risk before full-scale deployment, as well as pinpoint areas where AI delivers the greatest value, learn, iterate, and de-risk before full-scale deployment.Track and optimize costs throughout the lifecycle: One of the most overlooked elements of AI ROI is the hidden costs of data preparation, integration, and maintenance that can spiral if left unchecked. IT leaders should continuously monitor expenses related to infrastructure, data management, training, and human resources.Continuous monitoring and feedback: AI performance should be tracked continuously against KPIs and adjusted based on real-world data. Regular feedback loops allow for continuous fine-tuning, ensuring your investment aligns with evolving business needs and delivers sustained value. Related:Overcoming GenAI Implementation RoadblocksRelated:Successful GenAI implementations depend on more than adopting the right technologythey require an approach that maximizes value while minimizing risk. For most IT leaders, success depends on addressing challenges like data quality, model reliability, and organizational alignment. Heres how to overcome common implementation hurdles:Align AI with high-impact business goals. GenAI projects should directly support business objectives and deliver sustainable value like streamlining operations, cutting costs, or generating new revenue streams. Define priorities based on their impact and feasibility.Prioritize data integrity. Poor data quality prevents effective AI. Take time to establish data governance protocols from the start to manage privacy, compliance, and integrity while minimizing risk tied to faulty data.Start with pilot projects. Pilot projects allow you to test and iterate real-world impact before committing to large-scale rollouts. They offer valuable insights and mitigate risk.Monitor and measure continuously. Ongoing performance tracking ensures AI remains aligned with evolving business goals. Continuous adjustments are key for maximizing long-term value.About the AuthorNishad AcharyaHead of Talent Network, TuringNishad Acharya leads initiatives focused on the acquisition and experience of the 3M global professionals on Turing's Talent Cloud. At Turing, he has led critical roles in Strategy and Product that helped scale the company to a Unicorn. With a B.Tech from IIT Madras and an MBA from Wharton, Nishad has a strong foundation in both technology and business. Previously, he led strategy & digital transformation projects at The Boston Consulting Group. Nishad brings a passion for AI and expertise in tech services coupled with extensive experience in sectors like financial services and energy.See more from Nishad AcharyaNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 12 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Getting a Handle on AI Hallucinations
    John Edwards, Technology Journalist & AuthorNovember 11, 20244 Min ReadCarloscastilla via Alamy Stock PhotoAI hallucination occurs when a large language model (LLM) -- frequently a generative AI chatbot or computer vision tool -- perceives patterns or objects that are nonexistent or imperceptible to human observers, generating outputs that are either inaccurate or nonsensical.AI hallucinations can pose a significant challenge, particularly in high-stakes fields where accuracy is crucial, such as the energy industry, life sciences and healthcare, technology, finance, and legal sectors, says Beena Ammanath, head of technology trust and ethics at business advisory firm Deloitte. With generative AI's emergence, the importance of validating outputs has become even more critical for risk mitigation and governance, she states in an email interview. "While AI systems are becoming more advanced, hallucinations can undermine trust and, therefore, limit the widespread adoption of AI technologies."Primary CausesAI hallucinations are primarily caused by the nature of generative AI and LLMs, which rely on vast amounts of data to generate predictions, Ammanath says. "When the AI model lacks sufficient context, it may attempt to fill in the gaps by creating plausible sounding, but incorrect, information." This can occur due to incomplete training data, bias in the training data, or ambiguous prompts, she notes.Related:LLMs are generally trained for specific tasks, such as predicting the next word in a sequence, observes Swati Rallapalli, a senior machine learning research scientist in the AI division of the Carnegie Mellon University Software Engineering Institute. "These models are trained on terabytes of data from the Internet, which may include uncurated information," she explains in an online interview. "When generating text, the models produce outputs based on the probabilities learned during training, so outputs can be unpredictable and misrepresent facts."Detection ApproachesDepending on the specific application, hallucination metrics tools, such as AlignScore, can be trained to capture any similarity between two text inputs. Yet automated metrics don't always work effectively. "Using multiple metrics together, such as AlignScore, with metrics like BERTScore, may improve the detection," Rallapalli says.Another established way to minimize hallucinations is by using retrieval augmented generation (RAG), in which the model references the text from established databases relevant to the output. "There's also research in the area of fine-tuning models on curated datasets for factual correctness," Rallapalli says.Related:Yet even using existing multiple metrics may not fully guarantee hallucination detection. Therefore, further research is needed to develop more effective metrics to detect inaccuracies, Rallapalli says. "For example, comparing multiple AI outputs could detect if there are parts of the output that are inconsistent across different outputs or, in case of summarization, chunking up the summaries could better detect if the different chunks are aligned with facts within the original article." Such methods could help detect hallucinations better, she notes.Ammanath believes that detecting AI hallucinations requires a multi-pronged approach. She notes that human oversight, in which AI-generated content is reviewed by experts who can cross-check facts, is sometimes the only reliable way to curb hallucinations. "For example, if using generative AI to write a marketing e-mail, the organization might have a higher tolerance for error, as faults or inaccuracies are likely to be easy to identify and the outcomes are lower stakes for the enterprise," Ammanath explains. Yet when it comes to applications that include mission-critical business decisions, error tolerance must be low. "This makes a 'human-in the-loop', someone who validates model outputs, more important than ever before."Related:Hallucination TrainingThe best way to minimize hallucinations is by building your own pre-trained fundamental generative AI model, advises Scott Zoldi, chief AI officer at credit scoring service FICO. He notes, via email, that many organizations are now already using, or planning to use, this approach utilizing focused-domain and task-based models. "By doing so, one can have critical control of the data used in pre-training -- where most hallucinations arise -- and can constrain the use of context augmentation to ensure that such use doesn't increase hallucinations but re-enforces relationships already in the pre-training."Outside of building your own focused generative models, one needs to minimize harm created by hallucinations, Zoldi says. "[Enterprise] policy should prioritize a process for how the output of these tools will be used in a business context and then validate everything," he suggests.A Final ThoughtTo prepare the enterprise for a bold and successful future with generative AI, it's necessary to understand the nature and scale of the risks, as well as the governance tactics that can help mitigate them, Ammanath says. "AI hallucinations help to highlight both the power and limitations of current AI development and deployment."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 11 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Next Steps to Secure Open Banking Beyond Regulatory Compliance
    Final rules from the Consumer Financial Protection Bureau further the march towards open banking. What will it take to keep such data sharing secure?
    0 Kommentare 0 Anteile 12 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Refreshing Your Network DR Plan
    Hurricane Helene was a reminder that network DR plans should be up to date. Here is a checklist to be prepared for the next disaster.
    0 Kommentare 0 Anteile 10 Ansichten
  • WWW.INFORMATIONWEEK.COM
    AI on the Road: The Auto Industry Sees the Promise
    Phong Nguyen, Chief AI Officer, FPT SoftwareNovember 8, 20244 Min ReadBrain light via Alamy StockGenerative AI is reshaping the future of the automotive industry. For industry leaders, this is not just some cutting-edge technology, but a strategic enabler poised to redefine the market landscape. With 79% of executives expecting significant AI-driven transformation within the next three years, harnessing GenAI is no longer optional but essential to remain competitive in a rapidly evolving sector.As AI continues to make its mark, it transforms how vehicles are designed, secures them against evolving threats, and enhances the overall driving experience. From enabling cars to anticipate and respond to cyber risks to accelerating innovation in design, and creating more personalized driving experiences, AI is redefining the key aspects of automotive development and usage.Stopping Security BreachesWith the automotive industry undergoing rapid transformation, the cybersecurity risks it encounters are also increasing and becoming more complex. High-profile breaches, such as the Pandora ransomware attack on a major German car manufacturer in March 2022, highlight the urgent need for more advanced security strategies. The attackers compromised 1.4TB of sensitive data, including purchase orders, technical diagrams, and internal emails, exposing vulnerabilities within the sector.AI-driven systems, including predictive and generative models, process vast amounts of data in real-time, making them indispensable for detecting unusual patterns that signal potential attacks. By continuously learning from past threats and dynamic adaptation to emerging risks, AI-driven systems detect intrusions and work alongside rule-based or supervised models to predict outcomes and simulate attack scenarios for training purposes. These include isolating compromised nodes, blocking malicious IP addresses, and mitigating threats before they escalate. For this reason, 82% of IT decision-makers intend to invest in AI-driven cybersecurity within the next two years.GenAI's abilities to generate data and patterns empower organizations to stay ahead of cybercriminals by anticipating attacks before they occur. A prime example is a leading automotive manufacturer that has significantly improved the security of its vehicle-to-everything (V2X) communication systems by leveraging generative models to simulate various network attack scenarios. This approach allows the network's defensive mechanisms to be trained and tested against imminent breaches.By utilizing models such as variational autoencoders (VAEs) and generative adversarial networks (GANs), which can generate synthetic attack data for simulations, the company could mimic various cyberattack scenarios. This allowed them to detect and mitigate up to 90% of simulated attacks during the testing phases, demonstrating a robust improvement in the overall security posture.Redefining Automotive DesignGenerative AI is ushering in a new wave of innovation in automotive architecture, transforming vehicle design with cutting-edge capabilities. By leveraging generative design techniques, AI-driven systems can automatically produce multiple design iterations, enabling manufacturers to identify the most efficient and effective solutions. GenAI design optimizes engineering and aesthetic decisions, helping manufacturers reduce development time and costs by up to 20%, according to Precedence Research, giving companies a competitive edge in expediting time-to-market.Toyota Research Institute has integrated a generative AI tool that enables designers to leap from a text description to design sketches by specifying stylistic attributes such as sleek, SUV-like, and modern. Tackling the challenges where designs frequently fell short of meeting engineering requirements, this tool integrates both aesthetic and engineering requirements. That allows designers and engineers to collaborate more effectively while ensuring that the final designs meet critical technical specifications. By bridging the gap between creative and engineering teams, companies can ensure that final designs meet essential specifications while enhancing both the speed and quality of design iterations, enabling faster and more efficient innovation.A More Connected and Personalized Driver ExperienceOriginal equipment manufacturers are transforming the customer experience with GenAI in an increasingly demanding market. Unlike traditional voice command systems that rely on static and pre-programmed responses, AI-powered voice technology offers dynamic, natural conversations. Integrated into vehicles, GenAI enhances GPS navigation, entertainment systems, and other in-car functionalities, allowing drivers to interact meaningfully with their vehicles AI assistant.Volkswagen, for example, became the first automotive manufacturer to integrate ChatGPT into its voice assistant IDA. This offers drivers an AI-powered system that manages everything from infotainment to navigation and answers general knowledge questions.As GenAI continues to become more advanced, delivering an exceptional driver experience is now a key differentiator for manufacturers looking to stay competitive. Despite the significant advancements in leveraging AI to enhance customer interactions, many original equipment manufacturers (OEMs) struggle to meet customer expectations. A recent Boston Consulting Group study revealed that, while the quality of the car-buying experience is the most critical decision factor for many customers, only 52% of customers say they are completely satisfied with their most recent car-buying experience. This underscores the need for OEMs to refine the integration of AI-driven systems further to enhance both the purchasing and ownership experience.About the AuthorPhong NguyenChief AI Officer, FPT SoftwarePhong Nguyenis FPT Softwares chief artificial intelligence officer. He is an influential leader with vast managerial and technical experience, listed as Top150 AI Executives by Constellation Research 2024. Phong holds a PhD from the University of Tokyo and a master's degree from Carnegie Mellon University.See more from Phong NguyenNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 11 Ansichten
  • WWW.INFORMATIONWEEK.COM
    How AI is Reshaping the Food Services Industry
    John Edwards, Technology Journalist & AuthorNovember 8, 20246 Min ReadPanther Media GmbH via Alamy Stock PhotoThe food services industry might seem an unlikely candidate for AI adoption, yet the market, which includes full-service restaurants, quick-service restaurants, catering companies, coffee shops, private chefs, and a variety of other participants, is rapidly recognizing AI's immediate and long-term potential.AI in food services is poised for widespread adoption, predicts Colin Dowd, industry strategy senior manager at Armanino, an accounting and consulting firm. "As customer expectations shift, companies will be forced to meet their demands through AI solutions that are similar to their competitors," he notes in an email interview.Mike Kostyo, a vice president with food industry consulting firm Menu Matters, agrees. "It's hard to think of any facet of the food industry that isn't being transformed by AI," he observes via email. Kostyo says his research shows that consumers want lower costs --making it easier to customize or personalize a meal -- and faster service. "We tell our clients they should focus on those benefits and make sure they're clear to consumers when they implement new AI technologies."Seeking InsightsOn the research side, AI is being used to make sense out of the data deluge firms currently face. "Food companies are drowning in research and data, both from their own sources, such as sales data and loyalty programs, and from secondary sources," Kostyo says. "It's just not feasible for a human to wade through all of that data, so today's companies use AI to sift through it all, make connections, and develop recommendations."Related:AI can, for example, detect that spicy beverages are starting to catch on when paired with a particular flavor. "So, it may recommend building that combination into a new menu option or product," Kostyo says. It can do this constantly over time, taking into account billions of data points, creating innovation starting positions. "The team can take it from there, filling their pipeline with relevant products and menu items."Data collected from multiple sources can also be used to track customer preferences, providing early insights on emerging flavor trends. "For example, Campbells and Coca-Cola are currently using AI in tandem with food scientists to create new and exciting flavors and dishes for their customers based on insights collected from both internal and external data sources," Dowd says. "This approach can also be applied to restaurants and other locations that rely on recipes."Management and InnovationAI can also optimize inventory management. "AI is being used to determine when to order, and how much inventory a company needs to purchase, by analyzing historical data and current trends," Dowd says. "This allows the restaurant to maintain ideal inventory levels, reduce waste and better ensure that the restaurant always has the necessary ingredients."Related:When used as an innovation generator, AI can inspire fresh ideas. "Sometimes, when you get in that room together to come up with a new menu item or product, just facing down that blank page is the hardest part," Kostyo observes. "You can use AI for some starter ideas to work with." He says he loves to feed outlandish ideas into AI, such as, 'What would a dessert octopus look like?' "It may then develop this really wild dessert, like a chocolate octopus with different-flavored tentacles."Customer ExperienceAI promises to help restaurants provide a consistently positive experience to consumers, says Jay Fiske, president of Powerhouse Dynamics, an AI and IoT solutions provider for major multi-site food service firms, including Dunkin', Arby's, and Buffalo Wild Wings. He notes in an email interview that AI and ML can be used to flag concerning data, indicating potential problems, such as frozen meat going into the oven before it should, or predicting a likely freezer breakdown sometime within the next two weeks. "In these situations, facility managers have time to quickly preempt any issues that could cost them money, as well as their reputations with consumers," he says.Related:Another way AI is transforming the food services industry is by providing more efficient and reliable energy management. "This is important, because restaurants, ghost kitchens, and other food service businesses are extremely energy intensive," Fiske says. Refrigerators, freezers, ovens, dish washers, fryers, and air conditioners all consume massive amounts of power that can be controlled and optimized by AI.Future OutlookThe sky is the limit for food services industry AI, Kostyo states, noting that market players are taking various approaches. Some are excited about AI, and afraid to get left behind, so they're jumping right into these tools, while others are a little more skittish, concerned about ethical and privacy issues.Kostyo urges AI adopters to periodically monitor their customers' AI acceptance level. "In some ways, customers are very open to AI," he says. "Forty-six percent of consumers told us they're already using AI to assist with food decisions in some fashion, such as deciding what to cook or where to eat." Kostyo adds that 59% of surveyed consumers believe that AI can develop a recipe that's just as delicious as anyhuman chef could create.On the other hand, people still often crave a human touch. Kostyo reports that 66% of consumers would still rather have a dish that was created by a human chef. "Consumers frequently push back when they see AI being used in a way that would take a human job."Service FirstKostyo urges the food industry to use AI in ways that will enhance the overall consumer experience. "At the end of the day, we are the hospitality industry, and we need to remember that."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 11 Ansichten
  • WWW.INFORMATIONWEEK.COM
    GenAIs Impact on Cybersecurity
    Generative AI adoption is becoming ubiquitous as more software developers include the capability in their applications and users flock to sites like OpenAI to boost productivity. Meanwhile, threat actors are using the technology to accelerate the number and frequency of attacks.GenAI is revolutionizing both offense and defense in cybersecurity. On the positive side, it enhances threat detection, anomaly analysis and automation of security tasks. However, it also poses risks, as attackers are now using GenAI to craft more sophisticated and targeted attacks [such as] AI-generated phishing, says Timothy Bates, AI, cybersecurity, blockchain & XR professor of practice at University of Michigan and former Lenovo CTO. If your company hasnt updated its security policies to include GenAI, its time to act.According to James Arlen, CISO at data and AI platform company Aiven, GenAIs impact is proportional to its usage.If a bad actor uses GenAI, you'll get bad results for you. If a good actor uses GenAI wisely you'll get good results. And then there is the giant middle ground of bad actors just doing dumb things [like] poisoning the well and nominally good actors with the best of intentions doing unwise things, says Arlen. I think the net result is just acceleration. The direction hasn't changed, it's still an arms race, but now it's an arms race with a turbo button.Related:The Threat Is Real and GrowingGenAI is both a blessing and a curse when it comes to cybersecurity.On the one hand, the incorporation of AI into security tools and technologies has greatly enhanced vendor tooling to provide better threat detection and response through AI-driven features that can analyze vast amounts of data, far quicker than ever before, to identify patterns and anomalies that signal cyber threats, says Erik Avakian, technical counselor at Info-Tech Research Group. These new features can help predict new attack vectors, detect malware, vulnerabilities, phishing patterns and other attacks in real-time, including automating the response to certain cyber incidents. This greatly enhances our incident response processes by reducing response times and allowing our security analysts to focus on other and more complex tasks.Meanwhile, hackers and hacking groups have already incorporated AI and large language modeling (LLM) capabilities to carry out incredibly sophisticated attacks, such as next-generation phishing and social engineering attacks using deep fakes.The incorporation of voice impersonation and personalized content through deepfake attacks via AI-generated videos, voices or images make these attacks particularly harder to detect and defend against, says Avakian. GenAI can and is alsobeing used by adversaries to create advanced malware that adapts to defenses and evades current detection systems.Related:Pillar Securitys recent State of Attacks on GenAI report contains some sobering statistics about GenAIs impact on cybersecurity:90% of successful attacks resulted in sensitive data leakage.20% of jail break attack attempts successfully bypassed GenAI application guardrails.Adversaries require an average of just 42 seconds to execute an attack.Attackers needed only five interactions, on average, to complete a successful attack using GenAI applications.The attacks exploit vulnerabilities at every stage of interaction with GenAI systems, underscoring the need for comprehensive security measures. In addition, the attacks analyzed as part of Pillar Securitys research reveal an increase in both the frequency and complexity of prompt injection attacks, with users employing more sophisticated techniques and making persistent attempts to bypass safeguards.My biggest concern is the weaponization of GenAI -- cybercriminals using AI to automate attacks, create fake identities or exploit zero-day vulnerabilities faster than ever before. The rise of AI-driven attacks means that attack surfaces are constantly evolving, making traditional defenses less effective, says University of Michigans Bates. To mitigate these risks, were focusing on AI-driven security solutions that can respond just as rapidly to emerging threats. This includes leveraging behavioral analytics, AI-powered firewalls, and machine learning algorithms that can predict potential breaches.Related:In the case of deepfakes, Josh Bartolomie, VP of global threat services at email threat and defense solution provider Cofense recommends an out-of-band communication method to confirm the potentially fraudulent request, utilizing internal messaging services such as Slack, WhatsApp, or Microsoft Teams, or even establishing specific code words for specific types of requests or per executive leader.And data usage should be governed.With the increasing use of GenAI, employees may look to leverage this technology to make their job easier and faster. However, in doing so, they can be disclosing corporate information to third party sources, including such things as source code, financial information, customer details [and] product insight, says Bartolomie. The risk of this type of data being disclosed to third party AI services is high, as the totality of how the data is used can lead to a much broader data disclosure that could negatively impact that organization and their products [and] services.Casey Corcoran, field chief information security officer at cybersecurity services company Stratascale -- an SHI company, says in addition to phishing campaigns and deep fakes, bad actors are using models that are trained to take advantage of weaknesses in biometric systems and clone persona biometrics that will bypass technical biometric controls.[M]y two biggest fears are: 1) that rapidly evolving attacks will overwhelm traditional controls and overpower the ability of humans to distinguish between true and false; and 2) breaking the need to know and overall confidentiality and integrity of data through unmanaged data governance in GenAI use within organizations, including data and model poisoning, says Corcoran.Tal Zamir, CTO at advanced email and workspace security solutions provider Perception Point warns that attackers exploit vulnerabilities in GenAI-powered applications like chatbots, introducing new risks, including prompt injections. They also use the popularity of GenAI apps to spread malicious software, such as creating fake GenAI-themed Chrome extensions that steal data.Attackers leverage GenAI to automate tasks like building phishing pages and crafting hyper-targeted social engineering messages, increasing the scale and sophistication of attacks, says Zamir. Organizations should educate employees about the risks of sharing sensitive information with GenAI tools, as many services are in early stages and may not follow stringent security practices. Some services utilize user inputs to train models, risking data exposure. Employees should be mindful of legal and accuracy issues with AI-generated content, and always review it before sharing, as it could embed sensitive information.Bad actors can also use GenAI to identify zero days and create exploits. Similarly, defenders can also find zero days and create patches, but time is the enemy: hackers are not encumbered by rules that businesses must follow.[T]here will likely still be a big delay in applying patches in a lot of places. Some might even require physically replacing devices, says Johan Edholm, co-founder, information security officer and security engineer at external attack surface management platformprovider Detectify. In those cases, it might be quicker to temporarily add things between the vulnerable system and the attacker, like a WAF, firewall, air gapping, or similar, but this won't mitigate or solve the risk, only reduce it temporarily. "Make Sure Company Policies Address GenAIAccording to Info-Tech Research Groups Avakian, sound risk management starts with general and AI specific governance practices that implement AI policies.Even if our organizations have not yet incorporated GenAI technologies or solutions yet into the environment, it is likely that our own employees have experimented with it or are using AI applications or components of it outside the workplace, says Avakian. As CISOs, we need to be proactive and take a multi-faceted approach to implementing policies that account for our end-user acceptable use policies as well as incorporating AI reviews into our risk assessment processes that we already have in place. Our security policies should also evolve to reflect the capabilities and risks associated with GenAI if we don't have such inclusions in place already.Those policies should span the breadth of things in GenAI usage, ranging from AI training that covers data protection to monitoring to securing new and existing AI architectural deployments. Its also important that security, the workforce, privacy teams and legal teams understand AI concepts, including the architecture, privacy and compliance aspects so they can fully vet a solution containing AI components or features that the business would like to implement.Implementing these checks into a review process ensures that any solutions introduced into the environment will have been vetted properly and approved for use and any risks addressed prior to implementation and use, vastly reducing risk exposure or unintended consequences, says Avakian. Such reviews should incorporate policy compliance, access control reviews, application security, monitoring and associated policies for our AI models and systems to ensure that only authorized personnel can access, modify or deploy them into the environment. Working with our legal teams and privacy officers can help ensure any privacy and legal compliance issues have been fully vetted to ensure data privacy and ethical use.What if your companys policies have not been updated yet? Thomas Scanlon, principal researcher at Carnegie Mellon University's Software Engineering Institute recommends reviewing exemplar policies created by professional societies to which they belong or consulting firms with multiple clients.The biggest fear for GenAIs impact on cybersecurity is that well-meaning people will be using GenAI to improve their work quality and unknowingly open an attack vector for adversaries, says Scanlon. Defending against known attack types for GenAI is much more straightforward than defending against accidental insider threats.Technology spend and risk managementplatform Flexera established a GenAI policy early on, but it became obvious that the policy was quickly becoming obsolete.GenAI creates a lot of nuanced complexity that requires fresh approaches for cybersecurity, says Conal Gallagher, CISO & CIO of Flexera. A policy needs to address whether the organization allows or blocks it. If allowed, under what conditions? A GenAI policy must consider data leakage, model inversion attacks, API security, unintended sensitive data exposure, data poisoning, etc. It also needs to be mindful of privacy, ethical, and copyright concerns.To address GenAI as part of comprehensive risk management, Flexera formed an internal AI Council to help navigate the rapidly evolving threat landscape.Focusing efforts there will be far more meaningful than any written policy. The primary goal of the AI Council is to ensure that AI technologies are used in a way that aligns with the companys values, regulatory requirements, ethical standards and strategic objectives, says Gallagher. The AI Council is comprised of key stakeholders and subject matter experts within the company. This group is responsible for overseeing the development, deployment and internal use of GenAI systems.Bottom LineGenAI must be contemplated from end user, corporate risk and attacker perspectives. It also requires organizations to update policies to include GenAI if they havent done so already.The risks are generally two-fold: intentional attacks and inadvertent employee mistakes, both of which can have dire consequences for unprepared organizations. If internal policies have not been reviewed with GenAI specifically in mind and updated as necessary, organizations open the door to attacks that could have been avoided or mitigated.
    0 Kommentare 0 Anteile 17 Ansichten
  • WWW.INFORMATIONWEEK.COM
    ThreatLocker CEO Talks Supply Chain Risk, AIs Cybersecurity Role, and Fear
    Shane Snider, Senior Writer, InformationWeekNovember 7, 20246 Min ReadPictured: ThreatLocker CEO Danny Jenkins.Image provided by ThreatLockerIts no secret that cybersecurity concerns are growing. This past year has seen massive breaches, such as the breach of National Public Data (with 2.7 billion records stolen), and several large breaches of Snowflake customers such as Ticketmaster, Advance Auto Parts and AT&T. More than 165 companies were impacted by the Snowflake-linked breaches alone, according to a Mandiant investigation.According to CheckPoint research, global cyber-attacks increased by 30% in the second quarter of 2024, to 1,636 weekly attacks per organization. An IBM report says the average cost of a data breach globally rose 10% in 2024, to $4.8 million.So, its probably not that surprising that Orlando, Fla.-based cybersecurity firm ThreatLocker has ballooned to 450 employees since its 2017 launch. InformationWeek caught up with ThreatLocker CEO Danny Jenkins at the Gartner IT Symposium/XPO in Orlando last month.(Editors note: The following interview is edited for clarity and brevity.)Can you give us a little overview on what you were talking about at the event?What were talking about is that when youre installing software on your computer, that software has access to everything you have access to, and people often dont realize if they download that game, and there was a back door in that game, if there was some vulnerability from that game, it could potentially steal my files, grant someone access to my computer, grab the internet and send data. So, what we were really talking about was the supply chain risk. The biggest thing is vulnerabilities: The things a vendor didnt intend to do, but accidentally granted someone access to your data. You can really enhance your security through sensible controls and limiting access to those applications rather than trying to find every bad thing in the world.Related:AI has been the major reoccurring theme throughout the symposium. Can you talk a little about the way we approach these threats and how that is going to change as more businesses adopt emerging technologies like GenAI?Whats interesting is that were actually doing a session on how to create successful malware, and were going to talk about how were able to use AI to create undetectable malware versus the old way. If you think about AI, and you think about two years ago, if you wanted to create malware, there were a limited number of people in the world that could do that -- youd have to be a developer, youd have to have some experience, youd have to be smart enough to avoid protections. That pool of people was quite small. Today, you can just ask ChatGPT to create a program to do whatever you want, and it will spit out the code instantly. The amount of people that have the ability to create malware has now drastically increased the way to defend against that is to change the way you think about security. The way most companies think about security now is theyre looking for threats in their environment -- but thats not effective. The better way of approaching security is really to say, "Im just going to block what I dont need, and I dont care if its good and I dont care if its bad. If its not needed in my business, Im going to block it from happening."Related:As someone working in security, is the pace of AI adoption in enterprise a concern?I think the concern is the pace and the fear. AI has been around for a long time. What were seeing the last two years is generative AI and thats whats scaring people. If you think about self-driving cars, you think about the ability of machine learning, the ability to see data and manipulate and learn from that data. Whats scary is that the consumer is now seeing AI that produces and before it was always stuff in the background that you never really thought about. You never really thought about how your car is able to determine if somethings a trash can or if its a person. Now this thing can draw pictures and it can write documents better than I do, and create code. Am I worried about AI taking over the world from that perspective? No. But I am concerned about the tool set that weve now given people who may not be ethical.Related:Before, if you were smart enough to write successful malware, at least in the Western Hemisphere, youre smart enough to get a job and youre not going to risk going to jail. The people who were creating successful malware before, or successful cyber-attacks, were people in countries where there were not opportunities, like Russia. Now, you dont need to be smart enough to create successful cyber-attacks, and thats what concerns me. If you give someone who doesnt have capacity to earn a living access to tools that can allow them to steal data, the path they are going to follow is cyber crime. Just like other crime, when the economy is down and people dont have job, people steal and crime goes up. Cyber crime before was limited to people who had an understanding of technology. Now, the whole world will have access and thats what scares me -- and GenAI has facilitated that.How do you see your business changing in the next 5-10 years because of AI adoption?Ultimately, it changes the way people think about security, to where they have to start adopting more zero-trust approaches and more restrictive controls in their environment. Thats how it has to go -- there is no alternative. Before, there was a 10% chance you were going to get damaged by an attack, now its an 80% chance.If youre the CIO of an enterprise, how should you be looking at building out these new technologies and building on these new platforms? How should you be thinking about the security side of it?At the end of the day, you have to consider the internal politics of the business. And weve gone from a world where IT people and CIOs, who often come from introverted backgrounds where they dont communicate with boards, were seen as the people that make our computers work, and not the people who protect our business now the board is saying we have to bring a security department. I feel like if youre the CIO, you should be leading the conversation with your security team as a CIO, you should be driving that.What was one of your biggest takeaways from the event overall?I think the biggest thing Im seeing in the industry is fear is increasing, and rightly so. Were seeing more people willing to say, "I need to solve my problem. I know were sitting ducks right now." Thats because were on the technology side and we live and breathe this stuff. But what we dont necessarily always understand is what the customer perspective and customer viewpoint is and how do we solve their problems.About the AuthorShane SniderSenior Writer, InformationWeekShane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.See more from Shane SniderNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 16 Ansichten
  • WWW.INFORMATIONWEEK.COM
    How to Find the Right CISO
    Great CISOs are in short supply, so choose wisely. Here are five ways to make sure you've made the right pick.
    0 Kommentare 0 Anteile 17 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Letting Neurodiverse Talent Shine in Cybersecurity
    Approximately 15% to 20% of people are neurodivergent, and that percentage could be even higher in STEM fields. Neurodiversity is a broad term that includes many different conditions: autism spectrum disorder (ASD); attention-deficit/hyperactivity disorder (ADHD); and dyslexia, to name just a few.As cybersecurity stakeholders continue to discuss filling the talent gap and tackling todays security challenges, neurodiverse talent is a valuable resource. But attracting and working with this talent requires leaders to recognize the different needs of neurodivergent people and to foster work environments that make the most of their skills.Neurodiversity as an AssetMany major companies, such as Microsoft and SAP, recognize the value of neurodiverse talent and have formal recruiting programs. Jodi Asbell-Clarke, PhD, heard firsthand from companies with these kinds of hiring initiatives as she conducted research for her book on teaching neurodivergent people in STEM.I expected to hear something like, Oh, the CEOs nephew was autistic, and we wanted to do the right thing. I expected to hear things about philanthropy and equity, and that w as not what I heard at all, Asbell-Clarke, a senior leader and research scientist with TERC, a nonprofit focused on advancing STEM education, told InformationWeek. They were saying it's because the talent. We consider neurodiversity in our workforce our competitive advantage. These are the most persistent and creative and systematic problem solvers.Related:How can that talent be put to work in the cybersecurity workforce?Ian Campbell was diagnosed with major depressive disorder and generalized anxiety early in his life. Then, at the start of the pandemic, he was diagnosed as autistic. Cybersecurity was not his first career. He was providing tech support for the US House of Representatives before he made the switch to security. Currently, he is a senior security operations engineer at DomainTools, a domain research service company.Throughout his career, Campbell has found hyperfocus to be one of his strengths. Scrolling through tens of thousands of things, of log files, hyper-focusing on that, and being able to intuitively pattern match or detect pattern deviations was a huge benefit in both tech support and security, he says.Megan Roddie-Fonseca, senior security engineer at cloud monitoring as a service company Datadog, is autistic and has ADHD. She shares how productivity is one of her biggest strengths.I find efficient ways to do things, she says. I use that efficiency to be able to tackle tasks in a way that some people might not get the same amount of work done in the same amount of time.Related:Challenges in the WorkplaceWhile awareness of neurodiversity, and the nuance within that very broad term, is growing, there are still plenty of potential challenges in the workplace.Neurodivergent people face the tricky question of disclosure. Should they tell their managers and coworkers about their diagnoses? Neurodiversity is more openly discussed, but that doesnt mean there arent people who will misunderstand or react to disclosure negatively.A lot of people I know who are neurodivergent haven't come out as neurodivergent because they don't want to be seen that way, says Campbell. They don't want, frankly, their careers limited by someone who has a poor view of neurodivergence.The decision to conceal neurodivergent traits, known as masking, can be a difficult undertaking.Masking is basically suppressing your own neurodivergent urges and needs for the sake of function in a world that's not built for us, and masking is incredibly tiring, says Campbell.The decision to disclose or not is a personal choice, one that is likely influenced by the level of support people can expect from a workplace.Related:The way people communicate at work, for example, can potentially lead to misunderstandings. One study using the classic game of telephone -- a group passes information to one another down a line of several people -- illustrates these potential challenges.The study broke its subjects into three groups of people: autistic, non-autistic, and mix of both. The first two groups exhibited the same skill level relating to information transfer. But communication problems arose in the mixed group.In a cybersecurity workplace, neurotypical and neurodiverse people are going to need to find ways to communicate with one another effectively. Some work environments will foster opportunities to learn how to best build those communication pathways. Some wont.The physical aspects of the work environment can also be a challenge for neurodivergent people who have sensory processing issues. The lighting and sound levels of an office, for example, can result in sensory overwhelm for some people.Hiring and Supporting Neurodiverse TalentEnterprises can attract neurodiverse talent through formal hiring programs or by working with external organizations, such as Specialisterne. Regardless of the approach, partnered or solo, hiring managers and cybersecurity team leaders need to evaluate and adapt their strategies.During the interview process, Asbell-Clarke recommends matching that short experience to the work you hope to see in the actual work environment. If you are hiring someone who will be conducting highly detailed work under time constraints, mirror that process when evaluating candidates.If you want to see people's best problem-solving, give them the time and space to solve a task and then ask them about how they did it, she says.In the cybersecurity work environment, managers will find that getting the best work from their neurodivergent workers will require varying approaches.Neurodiversity is this massive spectrum, says Jackie McGuire, senior security strategist at Cribl, a unified data management platform. It can be confusing as a manager because you can have two team members who are on exact opposite ends of that spectrum who need completely polar opposite things.For example, one neurodivergent person may thrive in a structured environment, while another may do their best work with a high degree of freedom. Additionally, the ways neurodivergent people best receive and respond to feedback can differ.Taking that nuanced approach to management can not only benefit neurodivergent works but cybersecurity teams as a whole. Asbell-Clarke offers some questions that managers can ask their workers.What are the conditions that will make you the best problem solver? What do you need to have your talent shine? she says. Ask that of everyone, not just the neurodivergent.Direct, clear communication is one of the most valuable strategies for empowering cybersecurity teams with both neurodivergent and neurotypical people. For example, teams can commit to keeping clear notes and highlighting action items from meetings to ensure everyone is on the same page.Creating the kind of environment that is responsive to the different needs of its employees is an iterative process. Over time, workplaces can become more supportive of neurodiverse talent and encourage them to do their best work.Employers can encourage their neurodivergent learners to unmask by removing any stigma that may come along with that. Not only is it about adapting the workplace, it's also about adapting the culture, says Asbell-Clarke.Neurodiversity and Navigating the WorkplaceHow can neurodivergent people play a role in shaping cybersecurity workplaces? People, such as McGuire, Campbell, Roddie-Fonseca, who speak up can increase awareness of neurodiversity and its tremendous value to employers. But not everyone is in the position to be an advocate.Unfortunately, the people who would benefit the most from accommodations are oftentimes also the people the least likely to ask for them or the least able to initiate that conversation, McGuire points out.But that doesnt mean nothing can be done. Recognizing your neurodiversity can be an important step forward. Do what you can to educate yourself more on what neurodiversity is and the way it manifests and the types of support you can provide yourself, McGuire recommends.Connecting with other neurodivergent people, either at work or industry events, can be a helpful way to discuss navigating the workplace.Some companies have formal neurodiversity working groups. McGuire, who has ADHD and autism, helped co-found a neurodiversity employee resource group. One of our initial focuses is what can we do to help neurodiverse people better advocate for themselves at work, she shares.If a company doesnt have one of these groups, look for ways to create an informal one. The way neurodiversity manifests in different people, if you get more than a couple of neurodiverse people together you will get one of them who is a great advocate, says McGuire.Roddie-Fonseca didnt consider herself much of an advocate until her manager suggested she submit a talk about her experience as a neurodivergent individual in cybersecurity at the hacker and security conference Defcon.Attending cybersecurity industry can events can help neurodivergent people connect and discuss their workplace experiences and be a valuable tool for career development. There's a lot of competition for jobs at times and who you know does make an impact, says Roddie-Fonseca.Accommodations can be an important way to ensure neurodiverse people can do their best work but having that conversation can be uncomfortable for both the people asking and the people listening.Everybody's afraid of accommodations, but if we want to pull the amazing strengths from these neurodiverse people we have to be willing to invite things like accommodations and be flexible with them, says Campbell.Building a career in cybersecurity, or any other industry, takes time and often trial and error. There is no guarantee that a workplace will be the right fit.Understand that there are organizations and managers out there that will support you and will value you for who you are, not who they want you to be, says Roddie-Fonseca. Continue pursuing the opportunities to find a place where you will be happy and comfortable and thrive versus accepting a place that doesn't truly value you for the strengths you do have.
    0 Kommentare 0 Anteile 36 Ansichten
  • WWW.INFORMATIONWEEK.COM
    The Current Top AI Employers
    John Edwards, Technology Journalist & AuthorNovember 6, 20246 Min Readtanit boonruen via Alamy Stock PhotoWhile the unemployment rate for IT professionals rose to 6% in August, up from 5.6% the prior month, the situation is far brighter for AI experts.The AI job market has shown resilience and growth, especially in the first half of 2024, says Antti Karjalainen, an analyst with WilsonHCG, a global executive search and talent consulting firm. "Despite some fluctuations, the demand for AI professionals remains robust, driven by increased investments in AI technologies and projects," he observes in an online interview.Amazon currently leads the pack with 1,525 AI-related employees, primarily operating in the e-commerce and cloud computing sectors, according to data from WilsonHCGs talent intelligence and labor market analytics platform. Meta follows closely with 1,401 employees, while Microsoft is next with 1,253 employees in AI-related roles. "As expected, Apple and Alphabet also have significant numbers with 1,204 and 970 employees, respectively," Karjalainen notes.TalentNeuron, a global labor market analytics provider, breaks down the market somewhat differently. "Globally, the top five AI employers are Google, Capital One, Amazon, ByteDance, and TikTok," says David Wilkins, the firm's chief product and marketing officer. "Of note, Amazon saw a 519% increase in AI job postings year-over-year, and Google saw a 367% increase," he observes in an online interview. "Out of the top 20 AI employers, Reddit saw the largest year-over-year increase at 1,579%."Related:While the US is a strong market for AI talent, there's a significant shortage of AI specialists relative to the growing demand, Wilkins says. "So, companies, Google among them, have expanded overseas for talent." TalentNeuron's latest report on tech talent hubs found that demand growth is highest in emerging, lower-cost markets, such as the Indian cities of Pune and Hyderabad, as organizations seek to strategically place AI capabilities.Sought-After SkillsThe most sought-after skills in AI job postings, according to WilsonHCG data, include deep learning, machine learning model development, computer vision, generative AI, and natural language processing (NLP), Karjalainen says. "These skills are crucial for developing advanced AI systems and applications." He adds that advanced algorithm development, model deployment and productionization (the process of turning a prototype into something that can be mass-produced), and AI-specific programming languages, such as TensorFlow, PyTorch, and Keras, are also highly valued by employers.Related:Many employers also value proficiency in programming languages, such as Python, MATLAB, C++, and Java, as well as data analysis and statistical modeling talents. "These skills are foundational for any AI-related role and are necessary for developing, testing, and deploying AI models," Karjalainen says. Having the ability to work with large datasets, perform data mining, and apply statistical techniques is also crucial, he notes. "Employers are looking for candidates who can not only build AI models but also interpret and analyze the results to drive business decisions."Top FieldsWilsonHCG finds that the computer software industry leads with 4,135 AI professionals, indicating a strong demand for AI talent in software development and related services. Following closely is the IT and services sector, which employs 3,304 AI professionals. "This sector includes companies that provide IT consulting, system integration, and managed services, all of which are increasingly incorporating AI into their offerings," Karjalainen says.With 2,176 individuals working in the area, research organizations also have a significant number of AI professionals. This sector includes academic institutions, research labs, and private research firms focused on advancing AI technologies, Karjalainen says. Financial services, with 819 AI professionals, is yet another key sector, as banks, insurance companies and investment firms leverage AI for risk management, fraud detection, and customer service. Meanwhile, the internet industry, which includes companies providing online services and platforms, employs 635 AI professionals, reflecting the importance of AI in enhancing user experiences and optimizing operations.Related:Karjalainen says that other fields with significant AI employment include higher education (444 professionals), biotechnology (384 professionals), and mechanical or industrial engineering (378 professionals). The hospital and health care sector employs 324 AI professionals, highlighting the growing use of AI in medical diagnostics, treatment planning, and patient care. The automotive industry, with 320 AI professionals, is also a key player, particularly in the development of autonomous vehicles and advanced driver-assistance systems. Other important fields employing AI professionals include management consulting, electrical/electronic manufacturing, and semiconductors.Salary TrendsWilsonHCG data shows that AI job postings consistently offer higher salaries than non-AI IT postings. For instance, in July 2024, the average advertised salary for AI jobs was $166,584, while for non-AI IT jobs the average was $110,005. The comparison represents a difference of $56,579, or 51.4%.Looking at the annual median salary, AI jobs offer $150,018 compared to $108,377 for non-AI IT jobs, resulting in a difference of $41,641, or 38.4%, Karjalainen says. "This trend is consistent across various months, with AI job salaries consistently outpacing those of non-AI IT jobs by a substantial margin."Wilkins reports that top US AI employers offer a median base salary of $183,250, according to TalentNeuron salary data. The median base salary for US AI jobs overall is $143,000. In comparison, the US Bureau of Labor Statistics in May 2023 reported a median annual wage of $104,420 for computer and information technology occupations.Overall, the data suggests that top AI employers generally pay more than other employers, particularly in the IT sector, Karjalainen says. "This higher compensation reflects the specialized skills and expertise required for AI roles, as well as the high demand for AI talent in the job market"Talent HubsAccording to WilsonHCG statistics, California's San Francisco-Oakland-Hayward, metro area has 10,740 AI professionals, making it the leading AI talent hub. In second place with 5,422 AI professionals is the New York-Newark-Jersey City-NY-NJ-PA region. "This area is a significant center for finance, media, and technology, attracting a diverse range of AI talent," Karjalainen notes. The Seattle-Tacoma-Bellevue, Washington metro area, with 3,139 AI professionals, is another key location, driven by the presence of major tech companies and a strong innovation culture.About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 36 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Iranian Threat Actors Ramp Up Ransomware, Cyber Activity
    This summer, the Federal Bureau of Investigation (FBI), Cybersecurity and Infrastructure Security Agency (CISA), and the Department of Defense Cyber Crime Center (DC3) released a joint advisory on Iran-based threat actors and their role in ransomware attacks on organizations in the US and other countries around the globe.With the US presidential election coming to a close, nation state activity from Iran could escalate. In August, Iranian hackers compromised Donald Trumps presidential campaign. They leaked compromised information and sent stolen documents to people involved in Joe Bidens campaign, CNN reports.What are some of the major threat groups associated with Iran, and what do cybersecurity stakeholders need to know about them as they continue to target US organizations and politics?Threat GroupsA number of advanced persistent threat (APT) groups are affiliated with the Islamic Revolutionary Guard Corps (IRGC), a branch of the Iranian armed forces. [Other] relatively skilled cyber threat actor groups maintain arms distance length from the Iranian government, says Scott Small, director of cyber threat intelligence at Tidal Cyber, a threat-informed defense company. But they're operating pretty clearly on behalf [of] or aligned with the objectives of the Iranian government.Related:These objectives could be espionage and information collection or simply disruption. Hack-and-leak campaigns, as well as wiper campaigns, can be the result of Iranian threat actor activity. And as the recent joint advisory warns, these groups can leverage relationships with major ransomware groups to achieve their ends.Look at the relationships [of] a group like Pioneer Kitten/Fox Kitten. They're partnering and collaborating with some of the world's leading ransomware groups, says Small. These are extremely destructive malware that have been extremely successful in recent years at disrupting systems.The joint advisory highlights Pioneer Kitten, which is also known by such names as Fox Kitten, Lemon Sandstorm, Parisite, RUBIDIUM, and UNC757, among others. The FBI has observed these Iranian cyber actors coordinating with groups like ALPHV (also known as BlackCat), Ransomhouse, and NoEscape. The FBI assesses these actors do not disclose their Iran-based location to their ransomware affiliate contacts and are intentionally vague as to their nationality and origin, according to the joint advisory.Many other threat groups affiliated with Iran have caught the attention of the cybersecurity community. In 2023, Microsoft observed Peach Sandstorm (also tracked as APT33, Elfin, Holmium, and Refined Kitten) attempting to deliver backdoors to organizations in the military-industrial sector.Related:MuddyWater, operating as part of Irans Ministry of Intelligence and Security (MOIS), has targeted government and private sector organizations in the oil, defense, and telecommunications sectors.TTPsThe tactics, techniques, and procedures (TTPs) leveraged by Iranian threat actor groups are diverse. Tidal Cyber tracks many of the major threat actors; it has an Iran Cyber Threat Resource Center. Small found the top 10 groups his company tracks were associated with approximately 200 of the MITRE ATT&CK techniques.Certainly, this is just one data set of known TTPs, but just 10 groups being associated with about a third of well-known TTPs, it just demonstrates the breadth of techniques and methods used by these groups, he says.The two main avenues of compromise are social engineering and exploitation of unpatched vulnerabilities, according to Mark Bowling, chief information, security, and risk officer atExtraHop, a cloud-native cybersecurity solutions company.Social engineering conducted via tactics like phishing and smishing can lead to compromised credentials that grant threat actors system access, which can be leveraged for espionage and ransomware attacks.Related:Charming Kitten (aka CharmingCypress, Mint Sandstorm, and APT42), for example, leveraged a fake webinar to ensnare its victims, policy experts in the US, Europe, and Middle East.Unpatched vulnerabilities, whether directly within an organizations systems or its larger supply chain, can also be a useful tool for threat actors.They find that vulnerability and if that vulnerability has not been patched quickly, probably within a week, an exploit will be created, says Bowling.The joint advisory listed several CVEs that Iranian cyber actors leverage to gain initial access. Patches are available, but the advisory warns those will not be enough to mitigate the threat if actors have already gained access to vulnerable systems.Potential VictimsWho are the potential targets of ongoing cyber campaigns of Iran-based threat actors? The joint advisory highlighted defense, education, finance, health care, and government as sectors targeted by Iran-based cyber actors.What is the case with a lot of nation-state-sponsored threat activity right now, it's targeting a little bit of anyone and everyone, says Small.As the countdown to the presidential election grows shorter, threat actors could be actively carrying out influence campaigns. This kind of activity is not novel. In 2020, two Iranian nationals posed as members of the far-right militant group the Proud Boys as a part of a voter intimidation and influence campaign. Leading up to the 2024 election, we have already seen the hack and leak attack on the Trump campaign.Other entities could also fall prey to Iranian threat actor groups looking to spread misinformation or to simply create confusion. It's possible that they may target government facilities, state or local government, just to add more chaos to this already divided general election, says JP Castellanos, director of threat intelligence for Binary Defense, a managed detection and response company.Vulnerable operational technology (OT) devices have also been in the crosshairs of IRGC-sponsored actors. At the end of 2023, CISA, along with several other government agencies, released an advisory warning of cyber activity targeting OT devices commonly used in water and wastewater systems facilities.In 2023, CyberAv3ngers, an IRGC-affiliated group, hacked an Israeli-made Unitronics system at a municipal water authority in Pennsylvania. In the wake of the attack, screens at the facility read: "You Have Been Hacked. Down With Israel, Every Equipment 'Made In Israel' Is CyberAv3ngers Legal Target."The water authority booster station was able to switch to manual operations, but the attack serves as an ominous warning.The implications there were pretty clear that something else further could have been done tampering with the water levels and safety controls, things along those lines, says Small.As the Israel-Hamas war continues, organizations in Israel and allied countries could continue to be targets of attacks associated with Iran.The education sector has also seen elevated levels of Iran-based cyber activity, according to Small. For example, Microsoft Threat Intelligence observed Mint Sandstorm crafting phishing lures to target high-profile individuals at research organizations and universities.Escalating ThreatsIran is one of many nation state threat actors actively targeting public and private sector organizations in the US. Russia, North Korea, and China are in the game, too. In addition to politically motivated threat actors, enterprise leaders must contend with criminal groups motivated not by any specific flag but purely by profit.As a cyber defender, how much bandwidth do you have? How many groups can you possibly keep track of? We're always talking about prioritization, says Small.Castellanos points out that Iran is sometimes considered a lower tier threat, but he thinks that is a mistake. I would strongly recommend to not treat Iran as something not to worry about, he warns.Enterprise leaders are increasingly pressed to consider geopolitical tensions, the risks their organizations face in that context, and the resources available to mitigate those risks.Bowling stresses the importance of investing in talent, processes, and technology in the cybersecurity space.You can have good processes, and you can have good people. But if you don't have the technology that allows you to see the attackers and allows you to respond faster to the attack, then you're not going to be successful, he says.As enterprises continue to combat cyber threats from Iran, as well as other nation states and criminal groups, information sharing remains vital. That sharing of information [and] intelligence, that's actually what leads to a lot of these alerts being published and then it becomes usable by the rest of the community, says Small.
    0 Kommentare 0 Anteile 14 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Is the CHIPS Act in Jeopardy? What the US Election Could Mean for Semiconductor Industry
    Shane Snider, Senior Writer, InformationWeekNovember 5, 20244 Min ReadZoonar GmbH via Alamy Stock PhotoTodays election, which pollsters say is neck and neck in the presidential race between Republican candidate and former US President Donald J. Trump and Democratic candidate, US Vice President Kamala Harris, could determine the future of the $52.7 billion CHIPS and Science Act.The CHIPS Act, signed into law two years ago, is already doling out some of the $39 billion aimed at semiconductor manufacturing, with another 13.2 billion earmarked for R&D and workforce development. The Biden Administration has touted the effort as one of its major accomplishments.Trump recently took to the Joe Rogan podcast to declare: That chip deal is so bad. Trump says the US should instead impose tariffs he says would force more chips to be produced in the US. Others say tariffs, which are charged to the importing company and not the exporting country, would have the opposite effect.House Speaker Mike Johnson, in remarks that he recently walked back, suggested that the GOP would "probably" try to repeal the legislation. He later said that he misunderstood the question after pushback from GOP Rep. Brandon Williams, a New York state congress member locked in a tough race with Democrat candidate state Sen. John Mannion.Johnson told reporters that a repeal is not in the works, but there could be legislation to further streamline and improve the primary purpose of the bill -- to eliminate its costly regulations and Green New Deal requirements.Related:Billions at StakeAccording to the US Commerce Department, the CHIPS Act is expected to boost US chip manufacturing from zero to 30% of the worlds leading-edge chip supply by 2032. Chip companies like Intel, Micron, Samsung, and TSMC have announced massive US manufacturing upgrades and new construction.Last year, the Commerce Department said more than 460 companies had signaled interest in winning subsidies through the bill. The US has chosen 31 underdog tech hubs for potential hotspots that would funnel CHIPS funding into areas outside of traditional tech corridors. Earlier this week, Albany NanoTech Complex was selected as the first CHIPS Act R&D flagship facility, winning $825 million in subsidies to fund a new Extreme Ultraviolet (EUV) Accelerator.US Sen. Mark Kelly, (D-Ariz), was a key sponsor of the CHIPS Act. Since 2020, Arizona netted more than 40 semiconductor deals, with $102 billion in capital investment and the potential for 15,700 jobs. TSMCs investment in Arizona stands at more than $65 billion. Intel is investing more than $32 billion in two new Arizona foundries (chip factories), and to modernize an existing fab.Related:Republicans are staking their political fortunes on the CHIPS Act as well. Sen. John Cornyn (R-TX), also co-authored the bill. And Sen. Marco Rubio (R-Fla.) and Sen. Tom Cotton have been vocal about China-US competition. The CHIPS act could shore up a domestic supply chain that gives North America a real advantage in the chip wars.While the CHIPS Act itself is not on any ballot measures for this election cycle, economic policies that impact power consumption and other key tech-important issues may impact the industry as well. In Arkansas, one ballot measure proposal concerning lottery funds could help create more skilled tech workers, for instance. In Maine, a ballot measure proposes issuing $25 million in bonds to fund research for IT industries.Bob ODonnell, president and chief analyst at TECHnalysis Research, says the future of US semiconductor manufacturing should not be a partisan issue. Its clear to me that the CHIPS Act is incredibly important and hopefully it will cross party lines, he says in a phone interview with InformationWeek. Theres no doubt there will be demand down the road. And theres no question that the geographical diversity of semiconductor manufacturing is way out of whack. This is a US necessity.Related:A Question of Workforce Readiness and R&DJohn Dallesasse, a professor of electrical and computer engineering at the University of Illinois Grainger College of Engineering, says funding from the CHIPS Act will be crucial to workforce and educational needs. It would be unfortunate if the US government were to backpedal on the investments in semiconductor technology enabled by the CHIPS and Science Act, he tells InformationWeek in an e-mail interview. While the [act] provides incentives for manufacturing, theres also a significant emphasis on new technology R&D and workforce development -- both of which will be needed to restore US competitiveness in semiconductors.He adds, Without the combination of new technology development and incentives to bring manufacturing back to the US, we will continue on the downward spiral which has brought us from a dominant force in semiconductor manufacturing to a country which only makes 12% of the world's chips.About the AuthorShane SniderSenior Writer, InformationWeekShane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.See more from Shane SniderNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 15 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Broadband Is On the Ballot
    Broadband is a high priority for both candidates. Harris will focus on federal programs while Trump will use private partnerships and fewer regulations.
    0 Kommentare 0 Anteile 19 Ansichten
  • WWW.INFORMATIONWEEK.COM
    What Can Computing Win or Lose at the Ballot Box?
    Election 2024: Here are downballet races for IT professionals to watch as the results come in.
    0 Kommentare 0 Anteile 19 Ansichten
  • WWW.INFORMATIONWEEK.COM
    5 Tips for Balancing Cost and Security in Cloud Adoption
    Manju Naglapur, SVP and General Manager, Unisys November 4, 20244 Min ReadPixabayIn todays fast-paced digital landscape, cloud services have become essential for organizations looking to accelerate business innovations and limit downtime. With these opportunities, however, businesses face the challenge of balancing cost savings with security -- two priorities often seen as opposing forces.While cutting costs is tempting, especially in times of economic uncertainty, the risks of inadequate security can far outweigh the immediate savings. A single breach can lead to financial losses, reputational damage, and hefty regulatory penalties, making security investments a strategic imperative rather than an optional expense.Navigating Cost and SecurityIn Q2 2024, global spending on cloud infrastructure services grew 19% year over year to reach $78.2 billion, according to Canalys. This expansion reflects a growing reliance on cloud services as organizations seek flexibility, scalability, and operational efficiency. While the market continues to offer significant opportunities for cost optimization, it also introduces various new security challenges that businesses must confront.Emerging trends like serverless computing and containerization drive cost savings by reducing infrastructure overhead and improving the efficiency of cloud environments. Serverless architectures, for example, allow businesses to operate without the need to manage physical servers, reducing the total cost of ownership. Containerization, similarly, enhances application portability and deployment speed, allowing businesses to optimize resources and scale more effectively.Related:However, with these benefits come potential vulnerabilities. While eliminating the need to manage infrastructure, serverless computing can expose organizations to security risks if the infrastructure is not properly configured. Misconfigured serverless environments can lead to data breaches, unauthorized access or service disruptions. Such issues will likely negate initial cost savings. Similarly, while offering agility, containerization introduces risks related to container isolation and management, as vulnerabilities in one container could potentially compromise others.In addition to the technical security challenges, organizations must navigate an increasingly complex regulatory environment when adopting cloud solutions. Data protection laws such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States impose strict requirements on how businesses handle and secure personal data. Non-compliance with these regulations can result in substantial fines and penalties, making robust security measures non-negotiable for companies operating in regulated industries.Related:Balancing PrioritiesIn reality, businesses should not view cost savings and security as opposing forces. By adopting a thoughtful approach, organizations can create a cloud strategy that achieves both. To effectively navigate this balance, consider the following five key strategies.1. Conduct comprehensive risk assessmentsBefore selecting a cloud provider, organizations should assess their specific security risks and compliance requirements. This evaluation will help identify areas where cost savings can be safely realized without compromising critical security measures. A thorough risk assessment ensures that organizations allocate resources appropriately, investing in security where needed most.2. Leverage managed servicesFor organizations lacking the resources or in-house expertise to manage complex cloud environments, partnering with managed service providers (MSPs) can offer a cost-effective solution. MSPs specializing in cloud infrastructure can offer targeted services like cloud migration support, security management, and optimization of cloud-native tools, all of which help to secure the environment while minimizing operational costs.Related:3. Implement continuous monitoringTo balance cost and security, organizations must maintain vigilant oversight of their cloud services. Continuous monitoring allows businesses to detect vulnerabilities early, optimize resource usage and ensure cost efficiencies. Regularly reviewing cloud resource usage also allows businesses to optimize spending on storage and computing resources, combining security with cost efficiency.4. Optimize cloud security configurationsCloud misconfigurations can lead to vulnerabilities, such as leaving sensitive data in unprotected storage buckets. Regular reviews and automated tools designed for cloud environments can help ensure security settings, such as access to control lists and encryption policies, are properly configured and updated. By ensuring configurations are correct and aligned with best practices, businesses can prevent incidents that may incur hefty fines or recovery costs.5. Invest in employee trainingTraining should focus on the unique security challenges of cloud environments, such as identity and access management, shared responsibility models, and how to manage cloud resources securely. Ensuring employees understand these cloud-centric security aspects reduces human errors that could expose vulnerabilities. Furthermore, a well-trained workforce can leverage cloud resources more effectively, maximizing the return on cloud investments.Looking AheadThe tension between cost savings and security is not just a technical issue; it is a strategic imperative for organizations to navigate in the digital era. As cloud adoption continues to accelerate, businesses must carefully maintain this delicate balance to ensure their bottom line and security posture remain strong.Organizations can achieve the best of both worlds by adopting a cloud strategy that incorporates risk assessments, continuous education, and effective resource allocation.About the AuthorManju NaglapurSVP and General Manager, Unisys Manju Naglapur is the senior vice president and general manager of cloud, applications and infrastructure solutions at Unisys. He leads a global business unit focused on cloud transformation, application services, cybersecurity, and data intelligence. Manju joined Unisys through its acquisition of CompuGain in 2021, where he served as vice president from 2010-2022, driving strategy, sales, and service delivery. He holds an M.S. in engineering from the New Jersey Institute of Technology and a B.S. in engineering from Bangalore University.See more from Manju NaglapurNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 24 Ansichten
  • WWW.INFORMATIONWEEK.COM
    How Enterprises Use Cloud to Innovate
    Cloud utilization patterns continue to evolve as cloud providers introduce new capabilities and the competitive landscape evolves. Over time, businesses have been building a foundation for their future as they house more data, develop more cloud apps, and take advantage of services.We are going beyond the cloud adoption for cost benefits and cloud adoption for velocity benefits arguments. We are well into cloud adoption/XaaS adoption for innovation, says Shriram Natarajan, director, digital business transformation, at technology research and advisory firm, ISG. Enterprises can realize a super-return on their digitization and automation investments. By layering in AI to learn from the meta-data of previously digitized processes, they can further squeeze efficiencies and [advance] human augmentation.Cloud-delivered services enable companies to experiment more freely and execute new ideas faster and more efficiently than if they had to invest the capital expenses and time to build out on-prem IT infrastructure from scratch.There are a wide range of cutting-edge cloud services being delivered from the cloud that are transforming the competitive landscape, says David Boland, VP of cloud strategies at hot cloud storage company Wasabi Technologies, in an email interview. These include generative AI services, AI classification and recommendation, edge computing and IOT, quantum computing, advanced cloud storage solutions and cloud-based data analytics.Related:However, many organizations struggle to fully realize their vision due to a variety of strategic, operational and technical challenges.David Boland, WasabiOne of the most common challenges is managing cloud costs. Organizations often underestimate how quickly costs can escalate due to hidden costs, that results in a lack of visibility into cloud spending. Without proper monitoring, cloud budgets can spiral out of control, reducing the cost-effectiveness of cloud services, says Boland. Additionally, many organizations fall victim to vendor lock-in, where they become too dependent on a single cloud providers proprietary technologies and tools. This limits flexibility and makes it difficult to switch providers or use a multi-cloud strategy, hindering innovation and negotiation power.AI Service Adoption is RampantCompanies are increasingly becoming cloud-first, where everything from innovation to collaboration happens over a public or hybrid cloud. When companies harness the cloud, they can save on costly on-prem infrastructure, opening the door to investing in more strategic objectives like product innovation and global growth.Related:Without the power of the cloud, companies would have difficulty taking advantage of technologies like AI. Many of those services are cloud-based, which opens the door to advanced insights, automation and more creative ways to engage customers, says Jean-Phillipe Avelange, CIO of intelligent Internet platform Expereo, in an email interview. This is only possible when companies are doubling down on their cloud strategy.The key to using cloud effectively is developing clear objectives before adoption, mastering issues like privacy and security, and clearly understanding the impact on the workforce.Once those issues are resolved, AI can be used in many ways to increase productivity and develop never-before-realized insights into customers and the competitive landscape, says Avelange. However, were only at the beginning of this journey. Business and IT leaders and employees need to understand many facets of AI before they can consistently and effectively harness this technology.According to John Pettit, CTO at Google solution provider Promevo, AI and data analytics are critical to drive greater productivity across the stack.Related:Weve seen industry-tech startups challenging traditional business models by being more efficient and data-driven. These highly optimized business models require a lot of data and platforms that can scale with them, says Pettit in an email interview.According to Alex Perritaz, chief architect at high availability infrastructure provider InFlux Technologies, leading organizations use cloud computing to train the latest models, and innovation is mostly driven by AI.Using cloud solutions for these businesses makes sense as they don't need to commit to setting up the infrastructure but can use as they go the latest GPUs to train the latest models with as many parameters as they can fit in, allowing the companies to [stay] flexible and agile in their workflow, and remain at the cutting edge for their offerings, says Perritaz in an email interview. Many people [were] caught up on the high demand for computing, so many purchased and set up large infrastructures and the latest hardware. As NVIDIA rolls out new generations, they must refresh their hardware to keep up with the latest models. The obvious answer to being the most competitive in the market regarding service offerings is the price and the capacity of the infrastructure to run the largest AI models.John Samuel, global CIO and EVP at CGS (Computer Generated Solutions) says innovation is predominantly driven by cutting-edge technologies such as augmented and mixed reality (AR/XR), AI and now GenAI.Without the power of cloud computing, the cost of adopting these technologies to drive innovation can become prohibitive, says Samuel in an email interview. Cloud computing also allows companies to be more agile and benefit from the innovations offered by SaaS providers, who use the cloud to deliver their services to clients. The clouds consumption-based cost model enables companies to pilot and test innovations without making significant investments in hardware, software, and the associated build costs of creating technology for innovation from scratch.Companies are lowering costs and improving competitiveness using self-service generative AI services and agent-assist tools.These technologies can also rapidly surface insights from data, giving companies a competitive edge by enabling data-driven, agile decision-making, says Samuel.Driving the Most ValueTodays companies are using the cloud to become more agile, efficient, and secure. Cloud is capable of many things, from increasing data accessibility to scaling based on demand. Migrating to cloud enables companies to adjust to the changing dynamics within their operations. It also helps ensure everyone has the resources they need to do their jobs effectively.When employees are equipped with the necessary tools, they can focus on strategic thinking and innovation. The cloud essentially sets the foundation for todays innovative technologies like AI, says Promevos Pettit. Innovation is about taking new ideas and being able to quickly and consistently make them a reality. The cloud abstracts away the management of hardware, networking, security, and scalability. This allows innovators to focus on the specific implementation details of their concept and ultimately ship faster.Beyond AI, containers have added another layer of abstraction in defining consistent environments for workloads to run, allowing developers to build and test code in the same environment that will run in production, which accelerates testing and deployment loops and reduces errors.The further progression of infrastructure as code has allowed the whole stack in the cloud to be source-controlled and more easily managed, says Pettit. Tools like Terraform are the norm now and open applications [can] be even more portable in a multi-cloud setting. Serverless computing enables developers to write and execute event-driven code without managing servers, providing a pay-as-you-go pricing model and automatic scaling.Jean-Phillipe Avelange, ExpereoHowever, many organizations struggle to realize their vision due to various other factors, including the CFOs cost mindset, misalignment of business models and performing lift and shift operations instead of lift and modernize.Organizations that are successfully overcoming these barriers often have a clear KPI or metric that [they] intend to measure and a North Star for success. This will help organizations confront decisions that pull them in new directions, says Pettit. In addition, it will help them embrace a culture of innovation and experimentation, invest in the talents and skills of their employees, prioritize the customer experience, and adopt a cloud-first strategy to improve agility, scalability, and cost-efficiency.How to Measure SuccessToday, cloud services are being driven by business metrics centered on cost savings, operational efficiency, innovation, customer experience and, increasingly, sustainability goals.As cloud technology continues to evolve -- particularly with advancements in AI, edge computing, the focus will shift towards metrics that emphasize AI-driven decision-making, and deep personalization, says Wasabis Boland.Pettit says the initial metrics organizations choose tend to be some form of velocity or productivity proxies such as how frequently a team can ship updates to production, how quickly they can ship, or how successful they are at shipping. Another view focuses on the developers, such as whether they are satisfied with their environment, tools, and processes. Some people measure developer productivity by Jira tickets, work items, pull request comments, builds or other activity metrics.According to Pettit, while these metrics can all be useful, theyre irrelevant without a clear standard for success. For example, start with your customer and work backward, says Pettit. Perhaps you are an online retailer, and you want to increase customer satisfaction by providing an interactive shopping experience. In this case, you may measure a ratio of completed versus abandoned transactions. You align with your IT team on the new experience and decide that you want to be able to quickly test and deploy changes to the system. In this scenario, the IT team can design the system and pick some metrics that align with the business goals -- making the speed and success of deployments matter. The IT team can measure these and set a baseline to manage goals for improvement.Karan Bhagat, field CTO at global systems integrator Myriad360, says several metrics are evolving:Innovation is currently measured by the frequency of new product launches or feature releases. In the future, metrics around continuous deployment and integration could become more prevalent as agile practices mature.Scalability metrics, such as resource utilization and scaling events, track how well businesses adjust to demand. In the future, advanced predictive analytics could lead to proactive scaling, reducing downtime and improving performance.Performance and reliability are measured by service level agreements uptime and latency, which influence user satisfaction. In the future, enhanced AI monitoring may lead to real-time adjustments, improving reliability metrics significantly.Security and compliance metrics will likely become more critical.AI and automation impact metrics may quantify the impact of AI and automation on productivity and decision-making processes.Bottom LineCloud technology is an essential element in todays near real-time economy. Without it, organizations are hampered by the scope of their own equipment and the talent to manage it.Cloud usage has been steadily moving up the stack as organizations have migrated data, built cloud applications and are now experimenting and working with AI, AR, XR, and in some cases, robotics. The more a company does in the cloud, the more flexibility it must have to adapt to market changes, and the more services it has at its disposal to innovate.
    0 Kommentare 0 Anteile 24 Ansichten
  • WWW.INFORMATIONWEEK.COM
    The Intellectual Property Risks of GenAI
    Lisa Morgan, Freelance WriterNovember 1, 20249 Min ReadImar de Waard via Alamy StockGenerative AIs wildfire adoption is both a blessing and a curse. On one hand, many people are using GenAI to work more efficiently, and businesses are trying to scale it in an enterprise-class way. Meanwhile, the courts and regulators arent moving at warp speed, so companies need to be very smart about what theyre doing or risk intellectual property (IP) infringement, leakage, misuse and abuse.The law is certainly behind the business and technology adoptions right now, so a lot of our clients are entering into the space, adopting AI, and creating their own AI tools without a lot of guidance from the courts, in particular around copyright law, says Sarah Bro, a partner at law firm McDermott Will & Emery. Ive been really encouraged to see business and legal directives help mitigate risks or manage relationships around the technology and use, and parties really trying to be proactively thinking about how to address things when we dont have clear-cut legal guidance on every issue at this point.Why C-Suites and Boards Need to Get Ahead of This NowGenAI can lead to four types of IP infringement: copyright, trademark, patent, and trade secrets. Thus far, theres been more attention paid to the business competitiveness aspect of GenAI than the potential risks of its usage, which means that companies are not managing risks as adeptly as they should.Related:The C-suite needs to think about how employees are using confidential and proprietary data. What gives us a competitive advantage? says Brad Chin, IP partner at the Bracewell law firm. Are they using it in marketing for branding a new product or process? Are they using generative AI to create reports? Is the accounting department using generative AI to analyze data they might get from a third party?Historically, intellectual property protection has involved non-disclosure agreements (NDAs), and that has not changed. In fact, NDAs should cover GenAI. However, according to Chin, using the companys data, and perhaps others data, in a GenAI tool raises the question of whether the companys trade secrets are still protected.We dont have a lot of court precedent on that yet, but thats one of the considerations courts look at in a companys management of its trade secrets: what procedures, protocols, practices they put in place, so its important for C-suite executives to understand that risk is not only of the information their employees are putting into AI, but also the AI tools that their employees may be using with respect to someone elses information or data, says Chin. Most company NDAs and general corporate agreements dont have provisions that account for the use of generative AI or AI tools.Related:Some features of AI development make GenAI a risk from a copyright and confidentiality standpoint.To train machine learning models properly, you need a lot of data. Most savvy AI developers cut their teeth in academic environments, where they werent trained to consider copyright or privacy. They were simply provided public datasets to play with, says Kirk Sigmon, an intellectual property lawyer and partner at the Banner Witcoff law firm, in an email interview. As a result, AI developers inside and outside the company arent being limited in terms of what they can use to train and test models, and theyre very tempted to grab whatever they can to improve their models. This can be dangerous: It means that, perhaps more than other developers they might be tempted to overlook or not even think about copyright or confidentiality issues.Similarly, the art and other visual elements used in generative AI, such as Gemini and DALL-E, may be copyright protected, and logos may be trademark protected. GenAI could also result in patent-related issues, according to Bracewells Chin.Related:A third party could get access to information inputted into generative AI, which comes up with five different solutions, says Chin. If the company that has the information then files patents on that technology, it could exclude or preclude that original company from getting that part of the market.Boards and C-Suites Need to Prioritize GenAI DiscussionsBoards and C-suites that have not yet had discussions about the potential risks of GenAI need to start now.Employees can use and abuse generative AI even when it is not available to them as an official company tool. It can be really tempting for a junior employee to rely on ChatGPT to help them draft formal-sounding emails, generate creative art for a PowerPoint presentation and the like. Similarly, some employees might find it too tempting to use their phone to query a chatbot regarding questions that would otherwise require intense research, says Banner Witcoffs Sigmon. Since such uses dont necessarily make themselves obvious, you cant really figure out if, for example, an employee used generative AI to write an email, much less if they provided confidential information when doing so. This means that companies can be exposed to AI-related risk even when, on an official level, they may not have adopted any AI.Emily Poler, founding partner at Poler Legal, wonders what would happen if the GenAI platform a company uses becomes unavailable.Nobody knows whats going to happen in the various cases that have been brought against companies offering AI platforms, but one possible scenario is that OpenAI and other companies in the space have to destroy the LLM theyve created because the LLMs and/or the output from those LLMs amounts to copyright infringement on a massive scale, says Poler in an email interview. Relatedly, what happens to your companys data if the generative AI platform youre using goes bankrupt? Another company could buy up this data in a bankruptcy proceeding and your company might not have a say.Another point to consider is whether the generative AI platform can use a companys data to refine its LLMs, and if so, whether there are any protections against the companys confidential information being leaked to a third party. Theres also the question of how organizations will ensure employees dont rely on AI-generated hallucinations in their work, she says.Time to Update PoliciesBracewells Chin recommends doing an audit before creating or updating a policy so its clear how and why employees are using GenAI, for what purpose and what they are trying to achieve.The audit should help you understand the who, what, why, when, and where questions and then putting best practices [in place] -- you can use it, you cant use it, you can use it with these certain restrictions, says Chin. Education is also really important.Jason Raeburn, a partner in the litigation department of law firm Paul Hastings, says the key point is for CIOs and the C-suite to really engage with and understand the specific use cases for GenAI within their particular industry to assess what risks, if any, arise for their organization.As is the case with the use of technology within any large organization, successful implementation involves a careful and specific evaluation of the tech, the context of use, and its wider implications including intellectual property frameworks, regulatory frameworks, trust, ethics and compliance, says Raeburn in an email interview. Policies really need to be tailored to the needs of the organization, but at a minimum, they should include a GenAI in the workplace policy so there is clarity as to what the employer considers to be appropriate and inappropriate use for business purposes.Zara Watson Young, co-founder and CEO at the Watson & Young IP law firm, says the board, CEO and C-suite should regularly discuss how GenAI affects their IP strategies.These conversations should identify potential gaps in current policies, keep everyone informed about shifts in the legal landscape and ensure that the team understands the nuances of AIs impact on copyright and trademark laws, says Watson in an email interview. Equally important are discussions with counsel, focusing on developing robust IP policies for AI usage, ensuring compliance and implementing enforcement strategies to protect the companys rights.In the absence of concrete regulations and standards of practice, companies should develop their own policies based on how they use generative AI. According to Poler Legals Poler these policies should be split into two types, so they address both sides of the generative AI process: data gathering and training and output generation.Policies for data gathering and training need to be clear on how and what data is used, whether any third-party involvement is part of that process, the vetting and monitoring process for the data, how the data is stored, and how the company is protecting and securing that data, says Poler. The biggest concerns are privacy, security and infringement. These policies need to be up to date with all regulations, especially for international usage.Companies using their own datasets and models can better vet, monitor, and control data and models. However, the companies using third-party datasets and models need to do their due diligence on them and ensure transparency, security, legal compliance, and ethical usage, such as removing bias.Policies for output generation should be centered around monitoring. Companies should develop policies that contain how the monitoring is done for privacy and intellectual property concerns, says Poler. These policies need to contain instructions and procedures on how outputs are before they are ultimately used with checklists of important criteria to detect confidential information and protect it intellectual property.Banner Witcoffs Sigmon says companies should establish policies that strike a careful balance between the usefulness of AI enabled tools and the liability risks they pose. For instance, employees should be strongly discouraged from using any external AI tools that have not been fully tested and approved by their employer.Such tools compose both the risk of copyright infringement if, for example, they generate infringing content and a risk of confidential information loss such as if the employee discloses confidential information the AI and that information is stored, used for future training, or the like, says Sigmon. In turn, this means that if a company decides to use an AI tool, it should understand that tool deeply: how it operates, what data set were used to train it, who assumes liability if copyright infringement occurs and/or if sensitive data is exfiltrated, and [more].Bottom LineThe wildfire adoption and use of GenAI has outpaced sound risk management. Organizational leaders need to work cohesively to ensure that GenAI usage is in the companys best interests and that the potential risks and liabilities are understood and managed accordingly.Check to see whether your companys policies are up to date. If not, the time to start talking internally and with counsel is now.About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 54 Ansichten
  • WWW.INFORMATIONWEEK.COM
    The Essential Tools Every AI Developer Needs
    John Edwards, Technology Journalist & AuthorNovember 1, 20245 Min ReadAkarapong Chairean via Alamy Stock PhotoAI development, like the technology itself, is still in its early stages. This means that many development tools are also emerging and advancing.Over the past several months, we've seen the rise of a new technology stack when it comes to AI application development, as the focus shifts from building machine learning models to building AI solutions, says Maryam Ashoori, director of product management for watsonx.ai at IBM, in an email interview. "To navigate exponential leaps in AI, developers must translate groundbreaking AI research into real-world applications that benefit everyone."Essential ToolsCurrent AI tools provide a comprehensive ecosystem supporting every stage of the AI development process, says Savinay Berry, CTO and head of strategy and technology at cloud communications services provider Vonage, in an online discussion. A wide array of tools helps developers create and test code, manage large datasets, and build, train and deploy models, allowing users to work efficiently and effectively, he notes. "They also facilitate the interpretation of complex data, ensure scalability through cloud platforms, and offer robust management of data pipelines and experiments, which are crucial for the continuous improvement and success of AI projects."Related:Within the current AI landscape, there are a variety of essential development tools, Ashoori states, including integrated development environments (IDEs) for efficient coding, version control tools for collaboration, data management offerings for quality input, cloud platforms for scalability and access to GPUs, and collaboration tools for team synergy. "Each is critical for streamlined, scalable AI development," she says.Every AI developer should have a minimum set of tools that cover various aspects of development, advises Utkarsh Contractor, vice president of AI at generative AI solutions firm Aisera and a generative AI senior research fellow at Stanford University. "These include an IDE such as VS Code or Jupyter Notebook, a version control system like GitHub, and open-source frameworks like PyTorch and TensorFlow for building models." He believes that data manipulation and visualization tools, like Pandas, Matplotlib, and Apache Spark, are essential, along with monitoring tools, such as Grafana. Contractor adds that access to compute resources and GPUs, either locally or in the cloud, are also critical for quality AI development.GitHub Copilot, an AI-assisted programming tool, isn't essential but can enhance productivity, Contractor says. "Similarly, MLflow excels in tracking experiments and sharing models, while tools like Labelbox simplify dataset labeling." Both are valuable additions, but not required, he observes.Related:When it comes to cloud services, Berry notes that tools such as AWS SageMaker, Google Cloud AI Platform, Google Colab, Google Playground, and Azure Machine Learning offer fully managed environments for building, training, and deploying machine learning models. "These platforms provide a range of automated tools like AutoML, which can help developers quickly create and tune models without deep expertise in every aspect of machine learning," he says. "They are particularly valuable for developers who want to focus more on model development and less on infrastructure management." Berry adds that these tools add value by streamlining processes, enhancing collaboration, and improving the overall user experience, even if they aren't strictly required for all AI projects.When it comes to scaling AI development at the enterprise level, organizations should look beyond disparate development tools to broader platforms that support the rapid adoption of specific AI use-cases from data through deployment, Ashoori advises. "These platforms can provide an intuitive and collaborative development experience, automation capabilities, and pre-built patterns that support developer frameworks and integrations with the broader IT stack."Related:Fading AwayAs AI evolves and new tools arrive, several older offerings are falling out of favor. "Some libraries, such as NLTK and CoreNLP for natural language processing, are losing relevance and becoming obsolete due to innovations like generative AI and transformer models," Contractor says."Once the go-to for data analysis, Pandas and NumPy, two popular Python libraries for data analysis and scientific computing, are losing adherents," observes Yaroslav Kologryvov, co-founder of AI-powered business automation platform PLATMA via email. "Theano, replaced by TensorFlow and PyTorch, has suffered a similar fate."As AI development continues to advance rapidly, staying updated with the latest tools and frameworks is crucial for maintaining a competitive edge, Berry says. "While some older tools may still serve specific purposes, the shift toward more powerful, efficient solutions is clear," he states. "Embracing innovations ensures that AI developers can tackle increasingly complex challenges with agility and precision."Adaptability and StreamliningIn the rapidly evolving AI universe, developers must maintain a high degree of adaptability, continuously reassessing and optimizing their toolsets, Contractor says. "As innovation accelerates, tools that are essential today may quickly become outdated, necessitating the adoption of new cutting-edge technologies and methodologies to enhance workflows and maximize project efficiency and effectiveness."To simplify and streamline the AI development experience, organizations should seek platforms that provide developers with optionality, customization and configurability at every layer of the AI stack, Ashoori concludes.About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
    0 Kommentare 0 Anteile 62 Ansichten
  • WWW.INFORMATIONWEEK.COM
    2024 Halloween Frights in Tech
    This is not about your childhood-hunkered-by-the-campfire scary stories. These are the getcha, gotcha, gut-clenching tales of a truly terrifying time.
    0 Kommentare 0 Anteile 71 Ansichten
  • WWW.INFORMATIONWEEK.COM
    How to Keep IT Up and Running During a Disaster
    The United States experienced 28 disasters, including storms, flooding, tornadoes and a wildfire, that cost more than a billion dollars each in 2023, according to the National Oceanic and Atmospheric Administration (NOAA). And those were only the most expensive, weather-related events in one country. Around the world, natural disasters, including non-weather-related phenomena such as earthquakes and tsunamis, wreak havoc on human life and on infrastructure -- including the IT that keeps life in the digital age running smoothly.While the devastation caused by massive events understandably captures headlines, even relatively minor natural disasters such as large storms can affect IT operations. A 2024 report found that 52% of data center outages were the result of power failures. In the last decade, 83% of major power outages were weather-related. Even relatively minor storms can take out power lines.Fourteen percent of respondents surveyed for InformationWeeks 2024 Cyber Resilience Strategy Report said that their network accessibility had been disrupted by severe weather or a natural disaster. Sixteen percent ranked natural disasters as the single most significant event they had experienced.Some businesses affected by natural disasters dont survive in the first place: according to the Federal Emergency Management Agency, 43% of businesses never reopen and almost a third go out of business within two years. Loss of IT accessibility for nine days or more typically results in bankruptcy within one year.Related:Only 23% of respondents to a survey on the effects of Hurricane Sandy in 2012 were prepared for the storm. Despite the increasing prevalence of weather-related events because of climate change, the US Chamber of Commerce Foundation found that only 26% of small businesses have a disaster plan in place as of this year, suggesting that few have planned for how their IT will be impacted.Here, InformationWeek investigates strategies for keeping IT operational when disaster inevitably strikes, with insights from data center operator DataBanks senior director of sustainability, Jenny Gerson, and industrial software company IFSs chief technology officer for North America, Kevin Miller. Preventing Damage to InfrastructureDepending on the location of an IT facility and the natural disasters common to the region, any number of steps may need to be taken to prevent damage to essential physical IT components.We take into account all kinds of natural disasters when were looking at where to site a data center -- we try to site it in the safest place we can, Gerson says.Related:Jenny Gerson, DataBankIn earthquake-prone regions, buildings need to be able to withstand temblors -- additional reinforcements may be needed to prevent servers and wiring from being disrupted. Operators in areas prone to severe storms and hurricanes may need to both stormproof their buildings and ensure that essential equipment is located above ground level or in waterproof enclosures to avoid potential flood damage. Flood barriers may be advisable in some areas. Attention to potential mold damage after flooding may be necessary, as mold may create dangerous conditions for employees. And fire suppression systems may be able to mitigate damage before equipment is completely destroyed.Using IoT sensing technology can provide early warning of disaster events and keep an eye on equipment if human access to facilities is cut off. Sensors and cameras can be helpful in determining when it may be appropriate to switch operations to other facilities or back up servers. Moisture sensors, for example, can detect whether floods may be on the verge of impacting device performance.But, Miller notes, IoT devices can sometimes fail. Were seeing customers who are starting to rely more on options like Starlink, he says. Theres no physical infrastructure other than a mini satellite dish thats providing that connectivity -- but [it offers the] ability for them to get data, feed it back, analyze it, and then make predictive assessments on what they should be doing.Related:Onsite generators, including sustainable onsite power plants using solar or wind, and microgrids can keep operations running even if access to the main grid is cut off. And redundancy in cooling is crucial for data centers as well.Should the utility go down, we have a seamless way to get to our generator backup so there are no blips in power, Gerson claims. We always have backup cooling systems.Creating BackupsGeodiversity can make or break IT operations during a natural disaster. While steps can be taken to protect operations, they may not always be sufficient to prevent interruption. If a data center or other IT operation is taken offline, the ability to switch over to a location in an unaffected area or to more dispersed, cloud-based operations, can be relatively seamless if proper planning is in place.This type of redundancy requires careful implementation of regular backups -- cloud technology makes this relatively efficient but hard backups may be useful as well. Setting shorter recovery point objectives, while potentially more expensive in the short-term, will likely make it easier to get things back up and running if an operation is taken offline by a disaster.IoT devices may be helpful in recovering data that is not fully backed up. Many of these devices store data on their own before transmitting portions of it to the servers to which they are connected. In the case of a disaster, that stored information may be helpful in data restoration processes.Regulatory ComplianceIn disaster-prone regions, it is advisable to proactively facilitate relationships with government authorities and emergency response agencies. This can be helpful both in ensuring continued compliance and assistance in the event of a natural disaster.There are certain aspects of [disaster response] that need to be captured, Miller says. A lot of times in crisis mode, that becomes a secondary focus. But [disaster management] systems allow the tracking and the recording of that information.Being aware of deadlines for compliance reporting and being in contact with regulators if they might be missed can save money on potential fines and penalties. And notifying emergency response agencies may result in prioritization of assistance given the economic imperatives of IT continuity.Disaster PlansHaving a disaster plan in place will make for smoother operations when a disaster inevitably occurs. Yet, according to InformationWeek's report, only 24% of organizations accounted for natural disasters in their response plans. Given the physical and operational risk posed by these events, it is clear that organizations need to take a harder look at how they may be affected.While of course some of these disasters are regional -- not everyone will be affected by hurricanes or earthquakes -- others are nearly universal. Severe storms may strike anywhere. Keeping on top of potential risks before they occur, using AI and publicly available information, can make for smoother responses when a storm rolls in or a fire breaks out.Kevin Miller, IFSCreating clear channels of communication between leadership is among the most important planning steps. Establishing who is in charge and directing employees on how to modify their workflows ensures that operations will remain efficient. How exactly those workflows will proceed needs to be mapped out as well. Remote work infrastructure, with attendant security protocols, must be established well in advance.Miller notes that planning can also facilitate mutual aid between utilities and effectively utilizing data from sensing networks to coordinate efficient deployment of collaborative crews who can correct on-the-ground problems more quickly than organizations might be able to on their own.Organizing a full inventory of physical devices and their functions, as well as other potential vulnerabilities that may be affected, allows for an accurate assessment of how their loss might be compensated for.We have twice-annual preventative maintenance on all of our critical systems, Gerson says. But they get checked again should there be a natural disaster heading towards a facility: making sure that automated systems are ready to go, making sure our cooling systems are good to go.And if supply chains are disrupted, preventing access to necessary materials or technology, backups can be located for them as well.Protecting data from potential exposure during times of crisis is paramount. Remote work may create certain vulnerabilities and transfer of backups from the cloud and from various locations may create additional exposure.Rehearsing how backups can be located and implemented with minimal disruption allows for a calm, measured response and minimizes panic and potential hurdles. If a transition does not go well during a rehearsal, the problem can be addressed ahead of time.
    0 Kommentare 0 Anteile 62 Ansichten
  • WWW.INFORMATIONWEEK.COM
    Troll Disrupts Conference on Russian Disinformation With Zoom-Bombing
    The event, Russian Disinformation: Tactics, Influence, and Threats to National Security, showcased various methods the nation-state allegedly uses to disrupt western societies.
    0 Kommentare 0 Anteile 52 Ansichten
Mehr Artikel