-
- ΑΝΑΚΆΛΥΨΕ
-
-
-
-
News and Analysis Tech Leaders Trust
Πρόσφατες ενημερώσεις
-
WWW.INFORMATIONWEEK.COMHow Can Decision Makers Trust Hallucinating AI?Max Belov, Chief Technology Officer, Coherent SolutionsNovember 25, 20244 Min ReadMopic via Alamy StockEvery breakthrough has its share of mistakes. Artificial intelligence is disrupting routine tasks and is quickly establishing itself as a very powerful personal assistant. For example, AI helps medical researchers find and evaluate available donors for cell treatments, giving patients hope where there was none -- and the list of AI uses goes on. Yet, this same technology generates misleading financial forecasts based on non-existent data or creates references to fictitious scientific articles.AI models are only as trustworthy as the data they are trained on. However, even with a solid data foundation, the results of AI predictions are not 100% accurate. The impact of their occasional hallucinations may range from causing slight user embarrassment to billions of dollars worth of financial losses and legal repercussions for organizations. The question is how organizations can look beyond hallucinations and rely on AI in decision-making when the models are partially transparent.AI Confidence Mislead Decision-MakersOver half of Fortune 500 companies note AI as a potential risk factor. They fear AI inconsistencies and potential ethical risks that might lead to negative brand publicity and financial and reputational losses.It is impossible to fix AI hallucinations with a wave of a hand. So far, hallucinations are a common challenge in AI solutions. While the explainability of traditional ML methods and neural networks is well understood by now, many researchers are working on methods to explain GenAI and LLMs. Significant advancements will come in the near future. Meanwhile, AI certainly shouldnt be dismissed because it's not entirely reliable: It has already become a must-have tool for organizations across various industries. Decision-makers should rely on human intelligence and supervision to effectively integrate AI models into business processes.Related:Black Box Trust IssuesAI models are black boxes that lack transparency and are only partially explainable. Hallucinations are common in complex language models and deep learning systems. Such systems are affected since they hinge on patterns derived from vast datasets rather than on a fundamental deterministic understanding of the content.The good news is that taking an insightful look into the black boxes is possible, to a certain extent. Organizations can use specific methods to address one of the major trust issues with AI.Explaining the UnexplainableIn many business applications, especially those influencing critical decision-making, the ability to explain how an AI model reaches its conclusions is more important than achieving the highest possible models accuracy.Related:Not all AI models are black boxes. For example, decision trees or linear regressions are common in predictive analytics, financial forecasting, and business intelligence applications. These types of AI models are interpretable.For non-transparent models, SHAP (shapley additive explanations) helps explain how much each input affects an LLMs prediction. For example, users can ask an LLM to highlight key points in the input data and explain the logical chain behind the output. The answers can help improve system prompts and input data. However, SHAP has limited effectiveness for pre-trained LLMs due to their complexity, which requires different methods to explain their results. This is still a very rapidly developing field, with new emerging approaches for the interpretability of LLMs, such as using attention mechanisms to trace back how a model reaches its conclusion or using LLMs with memory functions to reduce inconsistencies over time.How Can Organizations Rely on AI Models?Organizations should carefully manage and contextualize the reliability of the models they use. Decision-makers can apply guardrails like regular audits and protocols for human oversight. They can consider creating a domain-specific knowledge base, which, for example, will be paramount for medical professionals, as their decisions often impact people's lives. They can also apply theRAGapproach (retrieval augmented generation) to mitigate associated risks. For example, a customer support chatbot can retrieve past interactions with a client, augment that data with product updates, and generate highly relevant responses to resolve a query.Related:Generative AI works best by augmenting human decision-making rather than entirely replacing it. It is important to keep humans in the loop, as they are competent to monitor a models accuracy and ethical compliance. As a rule of thumb, implement GenAI solutions that provide insights while putting human employees in charge of making the final decisions. They can correct and refine the outputs before an AI-driven error grows into a problem.AI models should be dynamic. Feedback loops where humans report issues and introduce changes play a key role in maintaining and enhancing the accuracy and reliability of AI. The next step in aligning AI with organizational processes is in fostering collaboration between data scientists, domain experts, and leaders.Lastly, before investing in GenAI, organizations should conduct a maturity assessment to make sure they have the necessary data infrastructure and robust governance policies in place. They need these to enhance the quality and accessibility of data used to train AI models.AI has great potential to enhance decision-making, but organizations must acknowledge the risks of hallucinations. When they implement consistent measures addressing this issue, they build trust in AI and maximize the benefits of AI solutions.About the AuthorMax BelovChief Technology Officer, Coherent SolutionsMax Belov joined Coherent Solutions in 1998 and assumed the role of CTO two years later. He is a seasoned software architect with deep expertise in designing and implementing distributed systems, cybersecurity, cloud technology, and AI. He also leads Coherents R&D Lab, focusing on IoT, blockchain, and AI innovations. His commentary and bylines appeared in CIO, Silicon UK Tech News, Business Reporter, and TechRadar Pro.See more from Max BelovNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports0 Σχόλια 0 Μοιράστηκε 27 ViewsΠαρακαλούμε συνδέσου στην Κοινότητά μας για να δηλώσεις τι σου αρέσει, να σχολιάσεις και να μοιραστείς με τους φίλους σου!
-
WWW.INFORMATIONWEEK.COMWhat Does Enterprise-Wide Cybersecurity Culture Look Like?An enterprises culture is defined by a lot of different things: shared organizational values, how leaders behave, the way teams interact. A companys culture can make or break its business. Increasingly, cybersecurity is a risk that enterprise culture cannot ignore. Phishing scams. Zero-day vulnerabilities. Ransomware. Threat actors can levy various tools in their arsenal at anyone in an organization, from executives to members of the help desk.InformationWeek spoke to security leaders from three different companies about how they approach building a security-first culture across their organizations and what that can look like for different companies.Recognizing ObstaclesCulture is a complex concept, not easily built and maintained. What are some of the biggest obstacles cybersecurity leaders face when establishing security as a core cultural value?First of all, enterprises have a lot of priorities: driving revenue, marketing products and services, supporting customers and employees, and, of course, security. While each priority plays an important role in sustaining a business, they may compete with one another for talent, time, and budget.How do you get the organization to put security on par with increasing EBITDA or trying to maximize your revenue? asks John Cannava, CIO atPing Identity, an identity management and governance company.Related:Thats a tough question to answer, especially when enterprise teams view security as a stumbling block rather than a business enabler. Often security protocols, and with good reason, force people to slow down.As soon as employees think that it's an obstacle to overcome, they may look at creative ways to bypass that security control, Monica Landen, senior vice president and CISO at Diligent, a board and governance software company, says.Cybersecurity cannot be the sole responsibility of security and IT teams, but it is the responsibility of these team leaders to demonstrate its value to everyone in an organization.There is continuous need to not just come up with the right control set but also to figure out what are the best ways to scale those controls across such a heterogenous, large landscape, says Sebastian Lange, CSO at software and technology company SAP.Identifying Security ChampionsIdentifying the right security controls, scaling them across an organization, and threading that security-first mindset throughout an entire organization requires security champions. Oftentimes, the CISO and CIO wear that mantle, but the person or people who fill that role will vary depending on the size, structure, and maturity of an organization. At SAP, Lange and Marielle Ehrmann, the companys global security compliance and risk officer, co-lead global security and cloud compliance.Related:SAP has more than 100,000 employees around the world. Each line of business in SAP often [has its] own architectural uniqueness, sometimes even their own execution culture. How do you fit around that? asks Lange.The company has business information security officers for each line of business. They do the line of business-specific security implementation. So, within that model, we are spreading our security and compliance strategy into each and every line of business, Ehrmann explains.SAP also identifies employees throughout the business as security champions, people who teammates can turn to with security questions related to their everyday work. There are quite a few embedded in all of the different areas of the business to help further the availability of people with expertise but also context [and] knowledge of the day-to-day work [of] employees, says Lange.At Ping Identity, the head of product plays a big role in championing security initiatives. We've taken the security team and embedded it within our engineering organization so that it's not a high-friction interaction between those organizations, says Cannava.They're part of the same team who's delivering a solution that has security as part of its core value. Related:Whoever leads security efforts should be accessible to everyone in the company, from the board and C-suite on down. [Make] sure that the cybersecurity leader is visible and approachable and really sets clear organizational priorities across the company in easy-to-understand terms, says Landen.Securing Buy-InWhoever is championing enterprise-wide security needs to secure buy-in from everyone within an organization. At the top, that means getting the C-suite and board to throw their weight behind security.At the end of the day, if you don't have the CEO on board and the CEO isn't voicing the same level of prioritization, then it will be something that's viewed as a half step back from fundamental business priorities, Cannava warns.Effective communication is a big part of getting that buy-in from leadership. How can security leaders explain to their boards and fellow executives that security is an essential business enabler?Really [convert] the technology language or cyber language or jargon into how will that risk potential impact revenue or reputation or our compliance? says Landen.Tabletop exercises can be a powerful way to not just tell but show executives the value of cybersecurity. Walking through various cybersecurity incident scenarios can demonstrate the vital connection security has to operations and business outcomes. Ping Identity periodically engages multiple members of the C-suite in these exercises.Not only do you know learn what the gap is, you also learn by doing you're pulled in and engaged as a member of the C-suite, and now you're invested, he says. So, when you goback to your teams, you can share with them why this is so important.Executives can and should talk about the importance of security, but employees throughout an organization are busy with their day-to-day responsibilities. Cybersecurity can easily slip through the cracks.It requires regular communication, not a single training done as a part of onboarding and quickly forgotten. We find it really important to explain to our employees the why of security and what it means to the overall companys success or brand, says Cannava.Explaining that why can come in the form of education. For example, teams can discuss real-life cybersecurity events and their consequences, like downtime and lost revenue.Security leaders can also help their enterprises adopt various ways to make security more engaging and less like a check-the-box item to be forgotten. So, we have various excellence awards in place, but we are also making it a fun topic, like with a capture the flag competition. So, gamification factors in there, Ehrmann shares.Building a Strong, Adaptable CultureCompany culture and security strategy are not one-size-fits-all. While different approaches will work for different organizations, successful security-first cultures share some commonalities. Security initiatives need to be actionable, measurable, and governable across an enterprise in order to be effective. Using an established framework, such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework, can help security leaders build and track the success of that security-first culture.Technology and cyber threats are constantly changing, which means that cybersecurity culture must be adaptable. Today, security leaders are contending with the GenAI boom and its power to both defend and fuel nefarious cyber activity.As security practitioners, we do have to get ahead of it and ensure that we have adopted the right policies and practices within the organization, so we don't inadvertently expose sensitive data or potentially impact any privacy policies, says Landen.As security leaders work to ensure security-first culture keeps up with shifting technologies and threats, they need continuous engagement with employees. Does every employee know about their companys cybersecurity risks and their role in mitigating them? Do they know where to go to with questions and where to report anything suspicious?When it comes to reporting a security incident or what they might view as suspicious activity, make it really low barrier for participation, for them to be able to report that, Cannava suggests.A strong cybersecurity culture ties security to the overall goals of a business, and it lives in the everyday actions of the people who work there.It's rather like swimming or like riding a bike. The moment you need it, you should know how to do it. It needs to come naturally, says Ehrmann. You can't create that ad hoc. It needs time, the right leadership and that goes across all levels of the company from the supervisory board over to the executive board to all senior executives down to each and every employee of the company.0 Σχόλια 0 Μοιράστηκε 28 Views
-
WWW.INFORMATIONWEEK.COMBeyond Washington, DC: The State of State-Based Data Privacy LawsIn the absence of federal law, how will state-based data privacy laws due to take effect in 2025 and beyond affect business operations?0 Σχόλια 0 Μοιράστηκε 29 Views
-
WWW.INFORMATIONWEEK.COMPrioritizing Responsible AI with ISO 42001 ComplianceAmine Anoun, CTO, EvisortNovember 22, 20245 Min ReadJ.V.G. Ransika via Alamy StockArtificial intelligence is a critical tool for companies looking to keep pace in the current competitive business landscape. The potential of AI promises great things -- greater efficiency among the workforce, customized customer experiences, better informed decision making for C-suite executives -- but it also comes with great risk, being just as useful to bad actors as it is to those with good intentions.To combat nefarious use and promote transparency around the new technology, the International Organization for Standardization (ISO) recently released ISO/IEC 42001. The new standard guides the ethical and responsible development and deployment of artificial intelligence management systems -- effectively giving organizations a vehicle to demonstrate that their approach to AI is ethical and secure.In a world where AI is rapidly reshaping industries, having a structured approach like the one outlined in ISO 42001 ensures that businesses are harnessing AI's power while maintaining ethical and transparent practices. Having recently gone through the certification process, heres what other companies considering taking this step should know:What Is ISO 42001 and Why Does It Matter?ISO 42001 is a groundbreaking international standard designed to establish a structured roadmap for the responsible development and usage of AI. This standard addresses critical challenges such as ethics, transparency, continual learning, and adaptation, ensuring that AI technologies are harnessed ethically and effectively.Related:The standard is also intentionally structured to align with other well-known management system standards, such as ISO 27001 and ISO 27701, to enhance existing security, privacy, and quality programs. For companies that touch AI, its of the utmost importance to be on top of the most rigorous AI frameworks and to implement strict guardrails to protect customers from malicious intent. It also gives organizations a foundation to comply with upcoming regulations, like the EU AI Act and related legislation in Colorado.The Journey to ISO 42001 ComplianceAchieving compliance with ISO 42001 required our organization to take a risk-based approach to the establishment, implementation, maintenance, and continuous improvement of an AIMS. This approach involved several phases, including:Defining the context in which our AI systems operate.Identifying relevant external and internal stakeholders.Understanding the expectations and requirements of the framework.Related:Additionally, building out a comprehensive, ISO 42001-certified AIMS required us to standardize the fairness, accessibility, safety, and various impacts of our AI systems. The standard looks at an organization's policies related to AI, the internal organization of roles and responsibilities for working with AI, resources for AI systems such as data, impact analysis of AI systems on individuals, groups, and society, the AI system life cycle, data management, information dissemination to interested parties (like external reporting), the use of AI systems, and third-party relationships.Undergoing this certification process took approximately six months and involved us working closely with our auditing partner. Upon completion of our assessment, we received certification of compliance with ISO 42001 standards to serve as an indicator of our prioritization of responsible and secure AI to all stakeholders. Moving forward, we must sustain the practices mandated by the framework and undergo future routine assessments to continuously ensure we maintain compliance.The Impact of ISO 42001 Compliance on Our AI StrategyCompliance with ISO 42001 is not just about meeting a set of standards; it fundamentally impacts how we utilize AI moving forward. With many companies building out their own AI capabilities, proving to customers and stakeholders that they can trust our systems is crucial -- and ultimately becomes a competitive differentiator.Related:ISO 42001 addresses these concerns through comprehensive requirements, providing a roadmap to satisfying security and safety concerns about our AI. Getting ISO 42001 certified has allowed us to do the following:Validate our AI management: ISO 42001 certification provides independent corroboration that we manage our AI systems ethically and responsibly.Enhance trust with stakeholders: The certification demonstrates our commitment to responsible AI practices and ethical, transparent, and accountable AI development and usage.Improve risk management: The certification helps us identify and mitigate risks associated with AI, ensuring potential ethical, security, and compliance issues are proactively addressed.Gain a competitive edge: As ISO 42001 was published recently, becoming one of the first globally to certify our AIMS gives us an edge in the market, signaling to clients, partners, and regulators that we are at the forefront of responsible AI use.The Importance of Working With an Accredited BodyAchieving ISO 42001 certification is a significant milestone, but its essential to work with an accredited body to ensure the certifications credibility. In our certification process, we prioritized working with Schellman, an ANAB-accredited auditing certification body, as our partner in this journey. Schellmans accreditation gave us assurance that they are properly equipped to verify our compliance with the ISO 42001 framework, adding an extra layer of validation to our certification while guiding us through the process.While compliance does not equate to absolute security, it positions an organization to mitigate risks effectively and demonstrate to customers that their security is a top priority. By adhering to the rigorous standards set out in ISO 42001, we are committed to responsible AI practices that not only meet but exceed stakeholder expectations, ensuring the safe and ethical use of AI technologies.About the AuthorAmine AnounCTO, EvisortAmine Anoun is the Founder and Chief Technology Officer of Evisort. Prior to Evisort, Anoun served as a data scientist at Uber. Anoun is a graduate of the Massachusetts Institute of Technology and CentaleSupelec. He was a member of the Forbes 30 Under 30 list and was also recognized as one of the Top 100 MIT Alumni in Technology in 2021.See more from Amine AnounNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports0 Σχόλια 0 Μοιράστηκε 31 Views
-
WWW.INFORMATIONWEEK.COMHow AI is Revolutionizing PhotographyJohn Edwards, Technology Journalist & AuthorNovember 22, 20245 Min ReadAlessandro Grandini via Alamy Stock PhotoAIrevolutionizes just about everything. Photography is no exception.AI is a powerful tool, says Conor Gay, vice president of business operations at MarathonFoto, a firm specializing in marathon race photography. When used appropriately, it can enhance great photography and create incredible designs, he explains in an email interview. "When used carelessly, it can cause confusion, misinformation, or just plain ruin a photo."AI helps photographers realize a creative vision, observes John McNeil, founder and CEO of John McNeil Studio, a San Francisco-area based creative firm. "It's an incredibly powerful tool, helping even less-than-professional photographers create more professional images," he notes in an online interview. "Features such as exposure correction, auto enhance, and auto skin tone, allow just about anyone to take great pictures."Johnny Wolf, founder and lead photographer at Johnny Wolf Studio, a New York-based corporate photography studio, says that AI allows him to explore complex concepts in pre-production and create realistic mockups for client approval, all without even having to touch a camera. "It gives me the ability to quickly test and iterate on ideas without having to invest time and resources," he explains via email. "This results in a more focused discovery phase with clients and leads to fewer revisions during the editing process."Related:Efficiency and QualityAI tools enable greater efficiency and higher quality when capturing images, automatically detecting subjects, optimizing an image at the moment it's taken, says Chris Zacharias, founder and CEO of visual image studio Imgix. AI tools can identify subjects and objects within an image to allow greater precision in editing," he notes in an email interview. "We can remove unwanted elements or introduce new ones into a photograph in pursuit of a creative vision."Wolf says that AI's greatest impact has been automating the mundane. "Basic tasks, like whitening a subject's teeth, or cloning-out distracting background elements, used to involve a time-consuming masking process, which can now be done with one click," he explains. "With AI handling the drudgery of post-production, I'm free to dedicate more time and energy into creative exploration, improving my craft and delivering a more personalized and impactful final product."AI has allowed us to identify images faster and more accurately than ever before, Gay says. "In the past two years, we've been able to get more images into runners' galleries, typically within 24 hours of their finish," he notes. "AI has also allowed us to capture more unique shots and angles."Related:Gay adds that AI can also capture relevant photo data that can be used by race partners and sponsors. "We're now able to identify sponsor-branding that appears in our photos, and even capture data around apparel and footwear." The technology is also used to enhance images. "We see different weather and lighting conditions throughout the day," he notes. "AI allows us to enhance these images to their highest quality."AI's power, control, flexibility, and possibilities are absolutely incredible, McNeil states. "Photoshop was a game changer 30 years ago, and in less than three years, AI makes things like histograms and layers seem positively quaint."The DownsideAI's ethical implications are significant, and will require discussion, consideration, and action by a wide range of stakeholders and organizations, Zacharias says. "There's much to consider, and the impacts are already being felt."Maintaining authenticity is a top concern, Gay says. "Especially in our industry, runners work tirelessly to complete their races," he notes. "The idea of someone being able to create a fake finish line moment with AI discredits the hard work each athlete puts into their race." Gay says his goal is to document runners' journeys on race day and to be as accurate as possible.Related:McNeil worries that there may now be too much reliance on AI. "The term 'well fix it in post' used to be a lazy joke people would make on set," he says. "Today, it's literally the process." Yet such an attitude can lead to images that are poorly crafted, uninventive, and looking like they were generated by AI. "Ultimately, as creative people and artists, we need to be more critical about the work we're putting into the world."While photo manipulation is nothing new, AI's ability to instantly generate photography that's indistinguishable from reality has led to a frightening inflection point, Wolf warns. "Anyone with an agenda and a web browser can now create and disseminate AI-generated propaganda as a real-time response to events," he explains. "If society can no longer trust photos as evidence of truth, we'll retreat further into our echo chambers and consume content that has been generated to reinforce our views."Looking ForwardArtists have always adapted and leveraged new tools and technologies to create novel forms of self-expression, Zacharias says. "The coming years will see a lot of discussion about what is real or authentic," he notes. "At the end of the day, AI is and will continue to be a tool, and it is we humans who will define what the soul of the medium is."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports0 Σχόλια 0 Μοιράστηκε 42 Views
-
WWW.INFORMATIONWEEK.COMInnovation Relies on Safeguarding AI Technology to Mitigate its RisksBrandon Taylor, Digital Editorial Program ManagerNovember 22, 20245 Min ViewAs artificial intelligence (AI) continues to advance and be adopted at a blistering pace, there are many ways AI systems can be vulnerable to attacks. Whether being fed malicious data that enables incorrect decisions or being hacked to gain access to sensitive data and more, there are no shortage of challenges in this growing landscape.Today, it's more vital than ever to consider taking steps to ensure that generative AI models, applications, data, and infrastructure are protected.In this archived panel discussion, Sara Peters (upper left in video), InformationWeeks editor-in-chief; Anton Chuvakin (upper right), senior staff security consultant, office of the CISO, for Google Cloud; and Manoj Saxena (lower middle), CEO and executive chairman of Trustwise AI, came together to discuss the importance of applying rigorous security to AI systems.This segment was part of our live virtual event titled, State of AI in Cybersecurity: Beyond the Hype. The event was presented by InformationWeek and Dark Reading on October 30, 2024.A transcript of the video follows below. Minor edits have been made for clarity.Sara Peters: All right, so let's start here. The topic is securing AI systems, and that can mean a lot of different things. It can mean cleaning up the data quality of the model training data or finding vulnerable code in the AI models.Related:It can also mean detecting hallucinations, avoiding IP leaks through generative AI prompts, detecting cyber-attacks, or avoiding network overloads. It can be a million different things. So, when I say securing AI systems, what does that mean to you?What are the biggest security risks or threats that we need to be thinking about right now? Manoj, I'll send that to you first.Manoj Saxena: Sure, again, thanks for having me on here. Securing AI broadly, I think, means taking a proactive approach not only to the outside-in view of security, but also the inside-out view of security. Because what we're entering is this new world that I call prompt to x. Today, it's prompt to intelligence.Tomorrow, it will be prompt to action through an agent. The day after tomorrow, it will be prompt to autonomy, where you will tell an agent to take over a process. So, what we are going to see in terms of securing AI are the external vectors that are going to be coming into your data, applications and networks.They're going to get amplified because of AI. People will start using AI to create new threat vectors outside-in, but also, there will be a tremendous number of inside-out threat vectors that will be going out.Related:This could be a result of employees not knowing how to use the system properly, or the prompts may end up creating new security risks like sensitive data leakage, harmful outputs or hallucinated output. So, in this environment, securing AI would mean proactively securing outside-in threats as well as inside-out threats.Anton Chauvkin: So, to add to this, we build a lot of structure around this. So, I will try to answer without disagreeing with Manoj, but by adding some structure. Sometimes I joke that it's my 3am answer if somebody says, Anton secure AI! What do you mean by this? I'll probably go to the model that we built.Of course, that's part of our safe, secure AI framework approach. When I think about securing AI, I think about models, applications, infrastructure and data. Unfortunately, it's not an acronym, because the acronym would be MADE, and it'll be really strange.But after somebody said it's not an acronym, obviously, everybody immediately thought it's an acronym. The more serious take on this is that if I say securing AI, I think about securing the model, the applications around it, the infrastructure under it, and the data inside it.I probably won't miss anything that's within the cybersecurity domain, if I think about these four buckets. Ultimately, I've seen a lot of people who obsess about one, and all sorts of hilarious and sometimes sad results happen. So, for example, I go and say the model is the most important, and I double down on prompt injection.Related:Then, SQL injection into my application kills me. If I don't want to do it in the cloud for some reason, and I try to do it on premise, my infrastructure is let go. My model is fine, my application is great, but my infrastructure is let go. So, ultimately, these four things are where my mind goes when I think about securing AI systems.MS: Can I just add to that? I think that's a good way to look at the stack and the framework. I would add one more piece to it, which is around the notion of securing the prompts. This is prompt security and filtering, prompt defense against adversarial attacks, as well as real time prompt validation.You're going to be securing the prompt itself. Where do you think that fits in?AC: We always include it in the model, because ultimately, the prompt issues to us are AI specific issues. Nothing in the application infrastructure data is AI specific, because these exist, obviously, for non-applications. For us, when we talk about prompt, it always sits inside the M part of the model.SP: So, Google's secure AI framework is something that we can all look for and read. It's a thorough and interesting read, and I recommend to our audience to do that later. But you guys have just covered a wide variety of different things already when I asked the first question.So, if I'm a CIO or a CISO, what should I be evaluating? How do I evaluate the security of a new AI tool during the procurement phase when you have just given me all these different things to try to evaluate? Anton, why don't you start with that one?Watch the archived State of AI in Cybersecurity: Beyond the Hype live virtual event on-demand today.About the AuthorBrandon TaylorDigital Editorial Program ManagerBrandon Taylor enables successful delivery of sponsored content programs across Enterprise IT media brands: Data Center Knowledge, InformationWeek, ITPro Today and Network Computing.See more from Brandon TaylorNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports0 Σχόλια 0 Μοιράστηκε 41 Views
-
WWW.INFORMATIONWEEK.COMThe New Cold War: US Urged to Form Manhattan Project for AGIShane Snider, Senior Writer, InformationWeekNovember 21, 20245 Min ReadIvan Marc Sanchez via Alamy StockA bi-partisan US congressional group this week released a report urging a Manhattan Project style effort to develop AI that will be able to outthink humans before China can win the AI arms race.The US-China Economic and Security Review Commission outlined the challenges and threats facing the US as powerful AI systems continue to quickly proliferate. The group calls for the government to fund and collaborate with private tech firms to quickly develop artificial general intelligence (AGI).The Manhattan Project was the historic collaboration between government and the private sector during World War II that culminated in the development of the first atomic bombs, which the US infamously unleashed on Japan. The subsequent proliferation of nuclear weapons led to an arms race and policy of mutually assured destruction that has so far deterred wartime use, but sparked the Cold War between the United States and Russia.While the Cold War with Russia ultimately ended in 1991, the nuclear stalemate caused by the arms pileup remains.A new stalemate may be brewing as superpowers race to develop AGI, which ethicists warn could present an existential threat to humanity. Many have likened such a race to the plot of the Terminator movie, where the fictional company Cyberdyne Systems works with the US government to achieve a type of AGI that ultimately leads to a nuclear catastrophe.Related:The commissions report doesnt sugarcoat the possibilities. The United States is locked in a long-term strategic competition with China to shape the rapidly evolving global technological landscape, according to the report. The rise in emerging tech like AI could alter the character of warfare and for the country winning the race, would tip the balance of power in its favor and reap economic benefits far into the 21st century.AI Effort in China ExpandsChinas State Council in 2017 unveiled its New Artificial Intelligence Development Plan, aiming to become the global leader in AI by 2030.The US still has an advantage, with more than 9,500 AI companies compared to Chinas nearly 2,000 companies. Private investment in the US dwarfs Chinas effort, with $605 billion invested, compared to Chinas $86 billion, according to a report from the non-profit Information Technology & Innovation Foundation.But Chinas government has poured a total of $184 million into AI research, including facial recognition, natural language processing, machine learning, deep learning, neural networks, robotics, automation, computer vision, data science, and cognitive computing.Related:While four US large language models (LLMs) sat on top of performance charts in April 2024, by June, only OpenAIs GPT-4o and Claude 3.5 remained on top. The next five models were all from China-backed companies.The gap between the leading models from the US industry leaders and those developed by Chinas foremost tech giants and start-ups is quickly closing, the report says.Where the US Should FocusThe report details areas that could make the biggest impact on the AI arms race where the US currently has an advantage, including advanced semiconductors, compute and cloud, AI models, and data. But China, the report contends, is making progress by subsidizing emerging technologies.The group recommends a priority on AI defense development for national security, with contracting authority given to the executive branch. The commission urges US Congress to establish and fund the program, with the goal of winning the AGI development race.The report also recommends banning certain technologies controlled by China, including autonomous humanoid robots, and products that could impact critical infrastructure. US policy has begun to shift to recognize the importance of competition with China over these critical technologies, the report states.Related:Manoj Saxena, CEO and founder of Responsible AI Institute and InformationWeek Insight Circle member, says the power of AGI should not be underestimated as countries race toward innovation.One issue is rushing to develop AGI just to win a tech race and not understanding the unintended consequences that these AI systems could create, he says. it could create a situation where we cannot control things, because we are accelerating without understanding what the AGI win would look like.Saxena says the AGI race may result in the need for another Geneva Convention, the global war treaties and humanitarian guidance that were greatly expanded after World War II.But Saxena says a public-private collaboration may lead to better solutions. As a country, were going to get not just the best and brightest minds working on this, most of which are in the private sector, but we will also get wider perspectives on ethical issues and potential harm and unintended consequences.An AI Disaster in the Making?Small actors have limited access to the tightly controlled materials needed to make a nuclear weapon. AI, on the other hand, enjoys a relatively open and democratized environment. Ethicists worry that ease of access to powerful and potentially dangerous systems may widen the threat landscape.RAI Institutes Saxena says weaponization of AI is already occurring, and it might take a catastrophic event to push all parties to the table. I think there is going to be some massive issues around AI going rogue, around autonomous weapon attacks that go out of control somewhere Unfortunately, civilization progresses through a combination of regulations, enforcement, and disasters.But in the case of AI, regulations are far behind, he says. Enforcements are also far behind, and it's more likely than not that there will be some disasters that will make us wake up and have some type of framework to limit these things.About the AuthorShane SniderSenior Writer, InformationWeekShane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.See more from Shane SniderNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports0 Σχόλια 0 Μοιράστηκε 59 Views
-
WWW.INFORMATIONWEEK.COMDoes the US Government Have a Cybersecurity Monoculture Problem?Carrie Pallardy, Contributing ReporterNovember 21, 20244 Min ReadSOPA Images Limited via Alamy Stock PhotoThe way Microsoft provided the US government with cybersecurity upgrades is under scrutiny. ProPublica published a report that delves into the White House Offer: a deal in which Microsoft sent consultants to install cybersecurity upgrades for free. But those free product upgrades were only covered for up to one year.Did this deal give Microsoft an unfair advantage, and what could it take to shift the federal governments reliance on the tech giants services?The White House OfferProPublica spoke to eight former Microsoft employees that played a part in the White House Offer. With their insight, the ProPublicas report details how this deal makes it difficult for users in the federal government to shift away from Microsofts products and how it helped to squeeze out competition.While the cybersecurity upgrades were initially free, government agencies need to pay come renewal time. After the installation of the products and employee training, switching to alternatives would be costly.ProPublica also reports that Microsoft salespeople recommended that federal agencies drop products from competitors to save costs.Critics raise concerns that Microsofts deal skirted antitrust laws and federal procurement laws.Why didn't you allow a Deloitte or an Accenture or somebody else to say we want free services to help us do it? Why couldn't they come in and do the same thing? If a company is willing to do something for free like that, why should it be a bias to Microsoft and not someone else that's capable as well? asks Morey Haber, chief security advisor at BeyondTrust, an identity and access security company. Related:ProPublica noted Microsofts defense of its deal and the way it worked with the federal government. Microsoft declined to comment when InformationWeek reached out.Josh Bartolomie, vice president of global threat services at email security company Cofense, points out that the scale of the federal government makes Microsoft a logical choice.The reality of it is there are no other viable platforms that offer the extensibility, scalability, manageability other than Microsoft, he tells InformationWeek.The Argument for DiversificationOverreliance on a single security vendor has its pitfalls. Generally speaking, you don't want to do a sole provider for any type of security services. You want to have checks and balances. You want to have risk mitigations. You want to have fail safes, backup plans, says Bartolomie.And there are arguments being made that Microsoft created a cybersecurity monoculture within the federal government.Related:Sen. Eric Schmitt (R-Mo.) and Sen. Ron Wyden (D-Ore.) raised concerns and called for a multi-vendor approach.DoD should embrace an alternate approach, expanding its use of open-source software and software from other vendors, that reduces risk-concentration to limit the blast area when our adversaries discover an exploitable security flaw in Microsofts, or another companys software, they wrote in a letter to John Sherman, former CIO of the Department of Defense.The government has experienced the fallout that follows exploited vulnerabilities. A Microsoft vulnerability played a role in the SolarWinds hack.Earlier this year it was disclosed that Midnight Blizzard, a Russian state-sponsored threat group,executed a password spray attack against Microsoft. Federal agency credentials were stolen in the attack, according to Cybersecurity Dive.There is proof out there that the monoculture is a problem, says Haber.PushbackMicrosofts dominance in the government space has not gone unchallenged over the years. For example, the Department of Defense pulled out of a $10 billion cloud deal with Microsoft. The contract, the Joint Enterprise Defense Infrastructure (JEDI), faced legal challenges from competitor AWS.Related:Competitors could continue to challenge Microsofts dominance in the government, but there are still questions about the cost associated with replacing those services.I think the government has provided pathways for other vendors to approach, but I think it would be difficult to displace them, says Haber.A New AdministrationCould the incoming Trump administration herald changes in the way the government works with Microsoft and other technology vendors?Each time a new administration steps in, Bartolomie points out that there is a thirst for change. Do I think that there's a potential that he [Trump] will go to Microsoft and say, Give us better deals. Give us this, give us that? That's a high possibility because other administrations have, he says. The government being one of the largest customers of the Microsoft ecosystem also gives them leverage.Trump has been vocal about his America First policy, but how that could be applied to cybersecurity services used by the government remains to be seen. Do you allow software being used from a cybersecurity or other perspective to be developed overseas? asks Haber.Haber points out that outsourced development is typical for cybersecurity companies. I'm not aware of any cybersecurity company that does exclusive US or even North America builds, he says.Any sort of government mandate requiring cybersecurity services developed solely in the US would raise challenges for Microsoft and the cybersecurity industry as a whole.While the administrations approach to cybersecurity and IT vendor relationships is not yet known, it is noteworthy that Trumps view of tech companies could be influential. Amazon pursued legal action over the $10 billion JEDI contract, claiming that Trumps dislike of company founder Jeff Bezos impacted its ability to secure the deal, The New York Times reports.About the AuthorCarrie PallardyContributing ReporterCarrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.See more from Carrie PallardyNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports0 Σχόλια 0 Μοιράστηκε 59 Views
-
WWW.INFORMATIONWEEK.COMThe Evolution of IT Job Interviews: Preparing for Skills-Based HiringIn recent years, IT job interviews have undergone a significant transformation. The traditional model, characterized by casual face-to-face conversations and subjective evaluations, is gradually being replaced by a more structured, skills-focused approach. This shift reflects a broader change in how organizations value and assess talent, moving away from an overemphasis on degrees in favor of the candidate's actual abilities and accomplishments.Major technology companies like Google, IBM, and Comcast have signed the Tear the Paper Ceiling initiative, signaling a significant change in hiring practices across various industries. In part, these companies are reacting to ongoing IT skill gaps, which IDC predicts will be responsible for more than $5.5 trillion in losses by 2026, causing significant harm to 90% of companies. Especially in an age where online resources for obtaining technical skills are so widely available, this shift will open job opportunities for candidates who possess the capabilities to perform well but lack a degree.The Rise of Structured InterviewsAs the emphasis shifts toward skills-based hiring, the interview process itself is evolving. HR departments are increasingly adopting structured interviews, recognizing their effectiveness in predicting job performance and employee retention compared to less formal traditional approaches.Related:Effectively structured interviews employ consistency in questioning across all candidates for a given position, and these questions focus on real-world applications of skills and achieved results. Structured interviews are most predictive of job performance when conducted by a panel of trained interviewers, and, after the interview is done, each panelist evaluates the candidate using standardized evaluation criteria before they come to a consensus.Preparing for the New Interview LandscapeAs job seekers navigate this evolving landscape, it's important to prepare for skills-based interviews. Here are some key things to consider:1. Analyze the job description: The job description serves as a roadmap for interview preparation. Carefully dissect both explicit and implicit skill requirements, using this information to guide their preparation.2. Brush up on technical proficiency: With the increased likelihood of technical or skills-based questions during the interview process, be prepared to demonstrate your technical abilities that are relevant for the job in real-time. This might entail solving coding challenges or troubleshooting complex scenarios relevant to the role.Related:3. Develop a repertoire of skills stories: Prepare a collection of compelling examples that illustrate how youve applied your skills to achieve results in the past like those that will be required on the job to which you are applying. Dont forget so-called soft skills. Companies are placing an increased emphasis on these for technical positions, so make sure to highlight your experience applying skills like planning, interpersonal communication, teamwork, and problem-solving to overcome challenges or achieve a goal.4. Align with organizational values: Understanding and demonstrating alignment with a companys culture and core values has become increasingly important. Research the organization's ethos and prepare concrete examples from your professional experience that reflect these values.5. Highlight individual contributions: In skills-based interviews, its not enough to simply be part of a successful team. Interviewers want to understand your specific role and contributions to solving problems or achieving goals. When discussing accomplishments, focus on what you contributed to the teams success, the methods and approaches you employed, and the quantifiable outcomes that resulted from these efforts.Related:The Implications of Skills-Based HiringThe shift toward skills-based hiring has far-reaching implications for both job seekers and employers. For candidates, it means a greater emphasis on demonstrating tangible technical and soft skills, including the impact candidates have had, rather than relying solely on what degrees they possess. This approach can level the playing field by allowing individuals to showcase their capabilities regardless of their educational background or prior career path.For employers, skills-based hiring offers the potential for more diverse and capable teams. By focusing on competencies rather than degrees, organizations can tap into a broader talent pool and potentially identify great candidates who would have been arbitrarily rejected in the past because they didnt have a computer science or engineering degree.Embracing the Future of HiringAs we move further into the era of skills-based hiring, both IT job seekers and employers must adjust their approaches. For candidates, this means shifting focus from degrees to capabilities and preparing to demonstrate their core skills and results during the interview process. Its no longer just about having a polished resume; its about being ready to show what you can do.For organizations, the challenge lies in developing robust, fair, and effective skills-based hiring processes. This may involve rethinking job requirements, redesigning interview processes, and investing in new assessment tools.Ultimately, the evolution of job interviews reflects a broader shift in how we value and assess talent in the modern workplace. By embracing these changes and preparing accordingly, both candidates and employers can navigate the workplace more effectively, leading to better matches between individuals and roles, and ultimately, more successful and satisfying professional relationships.0 Σχόλια 0 Μοιράστηκε 61 Views
-
WWW.INFORMATIONWEEK.COMHelp Wanted: IT Hiring Trends in 2025Lisa Morgan, Freelance WriterNovember 20, 20248 Min ReadEgor Kotenko via Alamy Stock Digital transformation changed the nature of the IT/business partnership. Specifically, IT has become a driving force in reducing operating costs, making the workforce more productive and improving value streams. These shifts are also reflected in the way IT is structured."When it comes to recruiting and attracting IT talent, it is time for IT leadership to shine. Their involvement in the process needs to be much more active to find the resources that teams need right now. And more than anything, its not the shiny new roles we are struggling to hire for. Its [the] on-prem network engineer and cloud architect you need to drive business outcomes right now. Its the cybersecurity analyst, says Brittany Lutes, research director at Info-Tech Research Group in an email interview.Most organizations arent sunsetting roles, she says. Instead, theyre more focused on retaining talent and ensuring that talent has the right skills and degree of competency in those skills.It takes time to hire new resources, ensure the institutional knowledge is understood, and then get those people to continue learning new skills or applications of the skills they were hired for, says Lutes. We are better off to retain people, explore opportunities to bring in new levels or job titles with HR to satisfy development desires, and understand what the new foundational and technical skills exist that we need to grow in our organization. We have opportunities to use technology in exciting new ways to make every role from CIO to the service desk analyst more efficient and more engaging. This year I think many organizations will work to embrace that.Related:Brittany Lutes, Info-Tech Research GroupBusiness and Technology Shifts Mean IT Changes?Julia Stalnaya, CEO and founder of B2B hiring platform Unbench, believes IT hiring in 2025 is poised for significant transformation, shaped by technological advancements, evolving workforce expectations and changing business needs.The 2024 layoffs across tech industries have introduced new dynamics into the hiring process for 2025. Companies [are] adapting to leaner staffing models increasingly turn to subcontracting and flexible hiring solutions, says Stalnaya.There are several drivers behind these changes. They include technological advancements such as data-driven recruitment, AI and automation.As a result of the pandemic, remote work expanded the talent pool beyond geographical boundaries, allowing companies to hire top talent from diverse locations. This trend necessitates more flexible work arrangements and a shift in how companies handle employee engagement and collaboration.Related:Skills-based hiring will focus more on specific skills and less on traditional qualifications. This reflects the need for targeted competencies aligned with business objectives, says Stalnaya. This trend is significant for roles in rapidly evolving fields like AI, cloud engineering and cybersecurity.Some traditional IT roles will continue to decline as AI takes on more routine tasks while other roles grow. She anticipates the following:AI specialists who work across departments to deploy intelligent systems that enhance productivity and innovationCybersecurity experts, including ethical hackers, cybersecurity analysts and cloud security specialists. In addition to protecting data, they will also help ensure compliance with security standards and develop strategies to safeguard against emerging threats.Data analysts and scientists who help the business leverage insights for strategic decision-makingBlockchain developers able to build decentralized solutionsHowever, organizations must invest in training and development and embrace flexible work options if they want to attract and keep talent, which may conflict with mandatory return to office (RTO) policies.Related:The 2024 layoffs have had a profound impact on the IT hiring landscape. With increased competition for fewer roles, companies now have access to a larger talent pool. Still, they must adapt their recruitment strategies to attract top candidates who are selective about company culture, flexibility and growth opportunities, says Stalnaya. This environment also highlights the importance of subcontracting.Julia Stalnaya, UnbenchGreg Goodin, managing director of talent solutions company EXOS TALENT expects companies to start hiring to get new R&D projects off the ground and to become more competitive.Dont expect it to bounce back to pandemic or necessarily pre-pandemic levels, says Goodin. IT as a career and industry has reached a maturation point where hypergrowth will be more of an outlier and more consistent 3% to 5% year-over-year growth [the norm]. Fiscal responsibility will become the expectation. Hiring trends will most likely run in parallel with this new cycle with compensation leveling out.Whats Changing, Why and How?Interest rates are higher than they have been in recent history, which has directly influenced companies' hiring practices. Not surprisingly, AI has also had an impact, making workforces more productive and reducing costs.Meanwhile, hiring has become more data-driven, enabling organizations to better understand what full-time and contingent labor they need.During the pandemic, companies continued to hire, even if they didnt have a plan for what the new talent would be doing, according to Goodin.This led to a hoarding of employees and spending countless unnecessary dollars to have people essentially doing nothing, says Goodin. This was one of many reasons companies started to reset their workforce with mass layoffs. Expect more thoughtful, data-driven hiring practices to make sure an ROI is being realized for each employee [hired].The IT talent shortage persists, so universities and bootcamps have been attempting to churn out talent thats aligned with market needs. Companies have also had more options, such as hiring internationally, including H-1B visas.Technology moves at a rapid pace, so it is important to maintain an open mind to new ways of solving problems, while not jumping the gun on a passing fad, says Goodin. Continue to invest in your existing workforce and upskill them, when possible. This will lead to better employee engagement [and] decreased costs associated with hiring and training up new talent into your organization.Soft-skills such as communication, character, and emotional quotient will all be that much more coveted in a world utilizing AI and automation to supplement human-beings, he says.IT and the Business IT has always supported the business, but its role is now more of a partnership and a thought leader when it comes to succeeding in an increasingly tech-fueled business environment.By 2025, I believe IT hiring will reflect a new paradigm as the line between IT and other business functions continues to blur, driven by AIs growing role in daily operations. Instead of being confined to back office support, IT will become a foundational aspect of strategic business operations, blending into departments like marketing, finance, and HR. This blur will likely accelerate next year, with roles and responsibilities traditionally managed by IT -- like data security, process automation and analytics -- becoming collaborative efforts with other departments, says Etoulia Salas-Burnett, director of the Center for Digital Business at Howard University. In an email interview This shift demands IT professionals who can bridge technical expertise with business strategy, making the boundary between IT and other business functions increasingly indistinct.In 2025, she believes several newer roles will become more common, including AI integration specialists, AI ethics and compliance officer, digital transformation strategist and automation success managers. Waning titles include help desk technician and network administrator, she says.Stephen Thompson, former VP of Talent at Docusign says the expansion of cloud services and serverless architectures has driven costs up, absorbing a growing portion of IT budgets. In some cases, server expenses rival the total cost of all employees at certain companies.Enterprise organizations are actively seeking integrations with platforms like Salesforce, ServiceNow, and SAP. The serverless shift and the continuous need for integration engineers have required IT departments to evolve, becoming stronger engineering partners and application developers for critical in-house systems in sales, marketing, and HR, says Thompson in an email interview. As a result, 2025 may resemble the 2012 to 2015 period, with new technologies promising growth, and a high demand for scalable engineering expertise. Companies will seek software engineers who not only maintain but also optimize system performance, ensuring a significant return on investment. These professionals turn the seemingly impossible into reality, saving IT departments millions in the process.Green Tech Will Become More PopularFrom smaller AI models to biodegradable and recycled packaging, tech is necessarily becoming greener.We are already seeing many companies review their carbon footprint and prioritize sustainability projects, in response to climate change [and] customer and client demand. CIOs and other tech leaders will likely face more pressure to prove their sustainability and green plans within their IT projects, says Matt Collingwood, founder & managing director at VIQU IT Recruitment. This may include legacy systems needing to be phased out, tracking energy consumption across the business and supply chain, and more. In turn, this will create an increasing demand for IT roles within infrastructure, systems engineering and development.In the meantime, organizations should be mindful about algorithmic and human bias in hiring.Organizations need to make sure that they are hiring inclusively, says Collingwood. This means anonymizing CVs to reduce chances of unconscious bias, as well as putting job adverts through a gender decoder to ensure the business is not inadvertently putting off great female tech professionals.About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports0 Σχόλια 0 Μοιράστηκε 59 Views
-
WWW.INFORMATIONWEEK.COMAI and the War Against Plastic WasteCarrie Pallardy, Contributing ReporterNovember 19, 202410 Min ReadPollution floating in river, Mumbai, Indiapaul kennedyvia Alamy Stock PhotoPlastic pollution is easy to visualize given that many rivers are choked with such waste and the oceans are littered with it. The Great Pacific Garbage Patch, a massive collection of plastic and other debris, is an infamous result of plastics proliferation. Even if you dont live near a body of water to see the problem firsthand, youre unlikely to walk far without seeing some piece of plastic crushed underfoot. But untangling this problem is anything but easy.Enter artificial intelligence, which is being applied to many complex problems that include plastics pollution. InformationWeek spoke to research scientists and startup founders about why plastics waste is such a complicated challenge and how they use AI in their work.The Plastics ProblemPlastic is ubiquitous today as food packaging, clothing, medical devices, cars, and so much more rely on this material. Since 1950, nearly 10 billion metric tons of plastic has been produced, and over half of that was just in the last 20 years. So, it's been this extremely prolific growth in production and use. It's partially due to just the absolute versatility of plastic, Chase Brewster, project scientist at Benioff Ocean Science Laboratory, a center for marine conservation at the University of California, Santa Barbara, says.Related:Plastic isnt biodegradable and recycling is imperfect. As more plastic is produced and more of it is wasted, much of that waste ends up back in the environment, polluting land and water as it breaks down into microplastics and nanoplastics.Even when plastic products end up at waste management facilities, processing them is not simple. A lot of people think of plastic as just plastic, Bradley Sutliff, a former National Institute of Standards and Technology (NIST) researcher, says. In reality, there are many different complex polymers that fall under the plastics umbrella. Recycle and reuse isnt just a matter of sorting; its a chemistry problem, too. Not every type of plastic can be mixed and processed into a recycled material.Plastic is undeniably convenient as a low-cost material used almost everywhere. It takes major shifts in behavior to reduce its consumption, a change that is not always feasible.Virgin plastic is cheaper than recycled plastic, which means companies are more likely to use the former. In turn, consumers are faced with the same economic choice, if they even have one.There is no one single answer to solving this environmental crisis. Plastic pollution is an economic, technical, educational, and behavioral problem, Joel Tasche, co-CEO and cofounder of CleanHub, a company focused on collecting plastic waste, says in an email interview.Related:So, how can AI arm organizations, policymakers, and people with the information and solutions to combat plastic pollution?AI and Quantifying Plastic WasteThe problem of plastic waste is not new, but the sheer volume makes it difficult to gather the granular data necessary to truly understand the challenge and develop actionable solutions.If you look at the body of research on plastic pollution, especially in the marine environment, there is a large gap in terms of actually in situ collected data, says Brewster.The Benioff Ocean Science Laboratory is working to change that through the Clean Currents Coalition, which focuses on removing plastic waste from rivers before it has the chance to enter the ocean. The Coalition is partnered with local organizations in nine different countries, representing a diverse group of river systems, to remove and analyze plastic pollution.We started looking into what artificial intelligence can do to help us to collect that more fine data that can help drive our upstream action to reduce plastic production and plastic leaking into the environment in the first place, says Brewster.Related:The project is developing a machine learning model with hardware and software components. A web cam is positioned above the conveyor belts of large trash wheels used to collect plastic waste in rivers. Those cameras count and categorize trash as it is pulled from the river.This system automatically [sends] that to the cloud, to a data set, visualizing that on a dashboard that can actively tell us what types of trash are coming out of the river and at what rate, Brewster explains. We have this huge data set from all over the world, collected synchronously over three years during the same time period, very diverse cultures, communities, river sizes, river geomorphologies.That data can be leveraged to gain more insight into what kinds of plastic end up in rivers, which flow to our oceans, and to inform targeted strategies for prevention and cleanup.AI and Waste ManagementVery little plastic is actually recycled; just 5% with some being combusted and the majority ends up in landfills. Waste management plants face the challenge of sorting through a massive influx of material, some recyclable and some not. And, of course, plastic is not one uniform group that can easily be processed into reusable material.AI and imaging equipment are being put to work in waste management facilities to tackle the complex job of sorting much more efficiently.During Sutliffs time with NIST, a US government agency focused on industrial competitiveness, he worked with a team to explore how AI could make recycling less expensive.Waste management facilities can use near-visible infrared light (NIR) to visualize and sort plastics. Sutliff and his team looked to improve this approach with machine learning.Our thought was that the computer might be a lot better at distinguishing which plastic is which if you teach it, he says. You can get a pretty good prediction of things like density and crystallinity by using near infrared light if you train your models correctly.The results of that work show promise, and Sutliff released the code to NISTs GitHub page. More accurate sorting can help waste management facilities monetize more recyclable materials, rather than incinerate them, send them to landfills, or potentially leak them back into the environment.Recyclers are based off of sorting plastics and then selling them to companies that will use them. And obviously, the company buying them wants to know exactly what they're getting. So, the better the recyclers can sort it, the more profitable it is, Sutliff says.There are other organizations working with waste collectors to improve sorting and identification. CleanHub, for example, developed a track-and-trace process. Waste collectors take photos and upload them to its AI-powered app.The app creates an audit trail, and machine learning predicts the composition and weight of the collected bags of trash. We focus on collecting both recyclable and non-recyclable plastics, directing recyclables back into the economy and converting non-recyclables into alternative fuels through co-processing, which minimizes environmental impact compared to traditional incineration, explains Tasche.Greyparrot is an AI waste analytics company that started out by partnering with about a dozen recycling plants around the world, gathering a global data set to power its platform. Today, that platform provides facilities with insights into more than 89 different waste categories. Greyparrots analyzers sit above the conveyor belts in waste management facilities, capturing images and sharing AI-powered insights. The latest generation of these analyzers is made of recyclable materials.If a given plant processes 10 tons or 15 tons of waste per day that accumulates to around like 20 million objects. We actually are looking at individually all those 20 million objects moving at two to three to four meters a second, very high-speed in real time, says Ambarish Mitra, co-founder of Greyparrot. We are not only doing classification of the objects, which goes through a waste flow, we are [also] doing financial value extraction.The more capable waste management facilities are of sorting and monetizing the plastic that flows into their operations, the more competitive the market for recycled materials can become.The entire waste and recycling industry is in constant competition with the virgin material market. Everything that either lowers cost or increases the quality of the output product is a step towards a circular economy, says Tasche.AI and a Policy ApproachPlastic waste is a problem with global stakes, and policymakers are paying attention. In 2022, the United Nations announced plans to create an international legally binding agreement to end plastic pollution. The treaty is currently going through negotiations, with another session slated to begin in November.Scientists at the Benioff Ocean Science Laboratory and Eric and Wendy Schmidt Center for Data Science & Environment at UC Berkeley developed the Global Plastics AI Policy Tool with the intention of understanding how different high-level policies could reduce plastic waste.This is a real opportunity to actually quantify or estimate what the impact of some of the highest priority policies that are on the table for the treaty [is] going to be, says Neil Nathan, a project scientist at the Benioff Ocean Science Laboratory.Of the 175 nations that agreed to create the global treaty to end plastic pollution, 60 have agreed to reach that goal by 2040. Ending plastic pollution by 2040 seems like an incredibly ambitious goal. Is that even possible? asks Nathan. One of the biggest findings for us is that it actually is close to possible.The AI tool leverages historic plastic consumption data, global trade data, and population data. Machine learning algorithms, such as Random Forest, uncover historical patterns in plastic consumption and waste and project how those patterns could change in the future.The team behind the tool has been tracking the policies up for discussion throughout the treaty negotiation process to evaluate which could have the biggest impact on outcomes like mismanaged waste, incinerated waste, and landfill waste.Nathan offers the example of a minimum recycled content mandate. This is essentially requiring that new products are made with a certain percentage, in this case 40%, of post-consumer recycled content. This alone actually will reduce plastic mismanaged waste leaking into [the] environment by over 50%, he says.Its been a really wonderful experience engaging with the plastic treaty, going into the United Nations meetings, working with delegates, putting this in their hands and seeing them being able to visualize the data and actually understanding the impact of these policies, Nathan adds.AI and Product DevelopmentHow could AI impact plastic waste further upstream? Data collected and analyzed by AI systems could change how CPG companies produce plastic goods before they ever end up in the hands of consumers, waste facilities, and the environment.For example, data gathered at waste management facilities can give product manufacturers insight into how their goods are actually being recycled, or not. No two waste plants are identical, Mitra points out. If your product gets recycled in plant A, doesn't mean you'll get recycled in plant B.That insight could show companies where changes need to be made in order to make their products more recyclable.Companies could increasingly be driven to make those kinds of changes by government policy, like the European Unions Extended Producer Responsibility (EPR) policies, as well as their own ESG goals.Millions of dollars [go] into packaging design. So, whatever will come out in 25 or 26, it's already designed, and whatever is being thought [of] for 26 and 27, it's in R&D today, says Mitra. [Companies] definitely have a large appetite to learn from this and improve their packaging design to make it more recyclable rather than just experimenting with material without knowing how will they actually go through these mechanical sorting environments.In addition to optimizing the production of plastic products and packaging for recyclability, AI can hunt for viable alternatives; novel materials discovery is a promising AI application. As it sifts through vast repositories of data, AI might bring to light a material that has economic viability and less environmental impact than plastic.Plastic has a long lifecycle, persisting for decades or even longer after it is produced. AI is being applied to every point of that lifecycle: from creation, to consumer use, to garbage and recycling cans, to waste management facilities, and to its environmental pollution. As more data is gathered, AI will be a useful tool for making strides toward achieving a circular economy and reducing plastic waste.About the AuthorCarrie PallardyContributing ReporterCarrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.See more from Carrie PallardyNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeReportsMore Reports0 Σχόλια 0 Μοιράστηκε 61 Views
-
WWW.INFORMATIONWEEK.COMHow Will AI Shape the Future of Cloud and Vice Versa?What role does AI have in the current state of cloud? What types of cloud systems and resources stand to benefit from, or need to adapt to, AI?0 Σχόλια 0 Μοιράστηκε 60 Views
-
WWW.INFORMATIONWEEK.COMMeta Rebukes Indias WhatsApp Antitrust Ruling, Plans Legal Challenge to $25M FineThe social media giants acquisition of WhatsApp is facing growing antitrust scrutiny over data sharing between its other applications.0 Σχόλια 0 Μοιράστηκε 60 Views
-
WWW.INFORMATIONWEEK.COMData Center Regulation Trends to Watch in 2025Discover how upcoming regulations impact data center operators, from new compliance rules to key takeaways from the EUs challenges with the Energy Efficiency Directive.0 Σχόλια 0 Μοιράστηκε 46 Views
-
WWW.INFORMATIONWEEK.COMCloud Levels the Playing Field in the Energy IndustryMatt Herpich, CEO, Conduit PowerNovember 18, 20243 Min ReadAleksia via Alamy StockWe operate as a lean technology startup in the traditionally conservative energy industry. We have to. Going up against $100 billion behemoths requires agility and operational efficiency so we can make smart, quick decisions in the moment and move at the speed of the market. Technology -- specifically digital transformation in the cloud -- has enabled this bold business model, allowing us to bridge the budget gap and compete against much larger competitors that have been in business for decades.But simply declaring youre going to operate in the cloud isnt likely to lead to success. What we set out to do hadnt been done before, but we were lucky enough to be working with two industry leaders that helped us make the right technology decisions during a relatively fast implementation cycle -- the impact of which proved valuable to operations, employee productivity, and morale, especially in a market as competitive as the energy sector.Pioneering Cloud SolutionOur core mission is to build power plants for companies that want to co-locate power generation near where they need it -- for data centers, new industry, and other places that have rapidly growing electricity needs. The ability to remotely operate modern control room systems is mission critical, allowing us to meet resilience, compliance and security requirements of our customers without having to deploy people on-site at every customer plant. Data fuels our remote management capabilities, providing operators fingertip access to all kinds of information about our customers on-site grids, including generation, usage and asset health data, which is fed to a central control center near Houston, Texas.Related:Building a vast wide-area network with high-performance fiber would cost tens of millions of dollars. Some of our well-funded competitors have done this, building massive IT infrastructures across customer sites at a scale that rivals the worlds biggest tech companies. We took a different path, working with Hitachi Energy and Amazon Web Services (AWS) to create a cloud-based network management solution. Moving to the cloud led to a six-month deployment timeline and cost a third of the budget required to build a similar on-premises deployment.Our cloud strategy allows our operators to monitor and control grid assets distributed across the state from a central location and provides fast response, redundancy, disaster recovery, and security services -- all the capabilities youd expect from one of the major players in our field. By working closely with our partners, we can do this without the big budget of our competitors nor hiring or training additional personnelRelated:Keeping Families Together During a DisasterMoving to the cloud provided immediate value. Only months after migrating to the cloud, Hurricane Beryl struck the Texas coastline and disrupted power throughout the state. Our customers needed their power plants up and running at optimal capacity to mitigate the outages.Normally, we would have had to send our operators hundreds of miles on site to oversee plant recoveries -- a costly and time-consuming prospect. However, our cloud-native strategy allowed our operators to simply log on from home where they could maintain operators from a web-based dashboard. Not only did we keep our customers up and running, but we also didnt have to disrupt our workers families during the federally declared disaster.The Cloud Delivers Operational FlexibilityOperating in the energy industry as a lean startup is much easier when you leverage the power of cloud technology to create operational efficiencies, provide stellar experiences to customers and make fast, data-informed decisions that put us one step ahead of larger competitors. Through the cloud, we are able to grow our IT capabilities in line with business growth objectives. While we currently operate plants that generate less than 100 megawatts (MW) of power, well be able to scale our SCADA and network management operations to meet the needs of any sized plant in the future. Well be able to meet this demand without having to over-provision resources in advance or invest millions of dollars in an on-premises data center. And that flexibility is worth its weight in gold.Related:About the AuthorMatt HerpichCEO, Conduit PowerMatt Herpich is CEO of Conduit Power. He previously served as head of finance and operations for Arcadia Powers Texas Energy Services business unit. He came to Arcadia through the acquisition of Real Simple Energy, a Texas-based retail power brokerage, of which he was co-founder. Matt earned a BS in Electrical Engineering from Yale and an MS in Information Technology (big data focus) from Carnegie Mellon.See more from Matt HerpichNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 56 Views
-
WWW.INFORMATIONWEEK.COM6 Cloud Trends to Watch in 2025Lisa Morgan, Freelance WriterNovember 18, 20247 Min ReadYAY Media AS via Alamy StockBusiness competitiveness is driving organizations deeper into the cloud where they can take advantage of more services. Leading organizations are realizing economic benefits ranging from cost savings and deeper insights to successful innovations. Artificial intelligence is driving an increase in cloud usage.We anticipate a continued growth of a few significant cloud trends for 2025, with the rise of GenAI being a major driver, says John Samuel, global CIO and EVP at CGS (Computer Generated Solutions), a global IT and outsourcing provider. Cloud providers are heavily investing in GenAI technologies, collaborating with chip manufacturers to enhance performance and scalability. This partnership enables cloud platforms to power a growing ecosystem of downstream SaaS providers that are building solutions to allow easier adoption of AI-based solutions. As a result, GenAI is becoming a key enabler for adopting advanced AI capabilities across industries, with cloud acting as the backbone.Mike Stawchansky, chief technology officer at financial services software applications provider Finastra, warns that privacy concerns and contractual ambiguity around the rights to utilize customer data for GenAI will become more of an issue. Customers want the insights and efficiencies GenAI can deliver but may not be willing to grant more extensive access to their data.Related:Capacity issues are becoming more frequent as organizations grapple with the resource-heavy workloads that AI-powered technologies bring. Further, expansion into other cloud regions may hold businesses back as different regions present their own unique compliance and data residency challenges, says Stawchansky in an email interview. GenAI is going to continue to put pressure on businesses to be better, faster, and more efficient. Early adopters are seeing gains, so those who have not yet begun to experiment with the technology risk falling behind.Cloud security will also become more of an issue, however. Security teams will begin to harness AI assistance to automate response processes for cloud-based exposure and threat detection.The volume of exposures and threats, combined with varying experience levels in SecOps teams, means that effective remediation relies on the ability to guide team members with prescriptive remediation procedures using AI. This will see mainstream adoption in 25, says Or Shoshani, co-founder and chief executive officer at real-time cloud security company Stream.Security. Enterprises have done little to evolve their detection and response capabilities to meet the unique aspects of the cloud environment. They are relying on processes and technology designed for securing on-prem infrastructures and its insufficient. Its a combination of lack of awareness of the problem, in addition to inertia.Related:Following are some more cloud trends to watch in 2025:1. Multi- and hybrid clouds will become more commonCloud providers recognize that customers prefer to leverage multiple cloud platforms for flexibility, risk mitigation, and performance optimization. In response, they are enabling inter-cloud operability, which enables users to perform analytics and utilize data across cloud providers without moving their data, according to CGS Samuel.Enterprises [and] small- to medium-sized businesses appear well-prepared for upcoming cloud trends like GenAI adoption and multi-cloud strategies. Cloud providers are responding by enabling technologies that reduce on-premises infrastructure needs, making it easier for companies to offload workloads to the cloud, Samuel says.Faiz Khan, founder & CEO at multi-cloud SaaS and managed service provider Wanclouds, says the major public cloud providers eliminated data transfer fees over the last year, making it easier to migrate data from one public cloud provider to another.Related:"By adopting a multi-cloud approach, you can train your distributed AI workloads and models across multiple environments. For instance, there could be a benefit to using Azure's computing power to train one AI model and AWS for another. Or you could keep your legacy cloud workloads on one public cloud and then your AI workloads on a separate public cloud, says Khan in an email interview. This approach enables enterprises to tailor their cloud environment to the needs of each AI application. It's also become a lot cheaper to migrate these applications across public clouds if the environment or needs change.However, time and cost can slow adoption. Businesses need sufficient time to research and implement new cloud solutions, and the confidence that the shift will deliver the cost optimization they expect. Balancing immediate costs with long-term cloud benefits is an important consideration.2. CISOs will need better cloud monitoringSOC and the SecOps teams will need to integrate cloud context into their day-to-day detection and response operations in 2025 to effectively detect and respond to exposures and threats in real time.Most SecOps teams are still relying on alert-based tools designed for on prem environments that are missing information related to exposure and attack path across all elements of the cloud infrastructure, saysStream.SecuritysShoshani. This results in an inability to identify real threats and massive amounts of time [to investigate] false positives.3. Cloud spending will increaseWanclouds Khan says most organizations will increase their cloud spending substantially in 2025.Like other aspects of IT, AI will be the force behind most of the trends occurring in the cloud in 2025. AI is going to drive a big spending boom in the cloud next year. Organizations need to increase the amount of cloud resources they have to be able to handle the compute GenAI model training requires, says Khan. Furthermore, we're also seeing IT teams now spending on new AI tools and features that can be utilized to improve and automate cloud management."4. Landing zones will gain more tractionLanding zones provide a standardized framework for cloud adoption. They are becoming more prominent as they address scalability and security concerns.Cloud providers are putting together templates for various industry verticals, such as finance and healthcare, that will allow customers to build solutions for regulatory environments much faster, saysFinastrasStawchansky. Most enterprises will be some way along their cloud-adoption and migration roadmaps today. Its just a question of how well-equipped they are for scaling their capabilities, especially as they seek to operationalize resource-heavy technologies, such as LLMs and GenAI. Having structured ways to approach scaling resources, while efficiently harnessing this technology will be crucial for ensuring ROI.5. Cybersecurity resilience will use digital twins for ransomware war gamesCyber recovery rehearsals will reach a new level of sophistication as organizations aim for ever faster recovery times in todays hybrid and multi-cloud environments.Cyber criminals are now using AI to increase the frequency, speed and scale of their attacks. In response, organizations will also use AI -- but this time, to fight back, says Matt Waxman, SVP and GM of data protection at secure multi-cloud data management company Veritas Technologies. As we know, the key to success is all in the preparation, so much of this work is going to be done in advance, using AI to predict the best response when ransomware inevitably hits.Organizations will play out ransomware wargames using cloud-based digital twins in AI-powered simulations of every possible attack scenario across entire infrastructures -- from edge to core to cloud.Plans are one thing, but an organization cant claim resilience without proving that those plans have been pressure tested. More than a nice-to-have, these advanced rehearsals will soon become mandated by regulation, says Waxman.6. Cyberspace will extend to outer spaceSatellite connectivity is growing, though Waxman says space-based computing may get a nudge in 2025.As humans return to the moon for the first time in more than 50 years aboard NASAs Artemis II, technology visionaries will be re-inspired to explore the possibilities of space-based computing, says Waxman. Datacenters in space present many benefits. For example, the unique environmental conditions mean that much less energy is required to spin disks or cool racks. However, there are also obvious challenges, such as transmission latency, which makes storage in space more effective for data that only needs accessed occasionally, like backup data.Spurred by the promise of datacenters freed from atmospheric constraints, in 2025, visionaries will begin to set their minds to overcoming the barriers to computing in space, he says.About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 55 Views
-
WWW.INFORMATIONWEEK.COMGenerative AI: Reshaping the Semiconductor Value ChainMarco Addino, Managing Director, AccentureNovember 15, 20244 Min ReadPanther Media GmbH via Alamy StockWithout doubt, todays society relies on the semiconductor industry. After all, can you imagine a world without smartphones, cars, power stations, and televisions? We, as people, and the global economy more broadly, rely on continued innovation from the chips the industry produces. But there are challenges facing these companies across the board -- design, manufacturing and demand. Talent is in increasingly short supply, and on top of that geopolitical tensions and onshore manufacturing add another layer of complexity. The industry keeps having hurdles to cross, one after another it seems. Only recently, another problem for the industry made the headlines when Hurricane Helene hit Spruce Pine, one of the worlds most important locations for semiconductor, raising questions about the impact it would have.It's already tough enough for semiconductor companies to deal with and resolve these issues, but they are appearing while generative AI has made the need for innovation a must do now, not a must do at some point. The question is whether the semiconductor industry can reinvent itself quickly enough for this new generative AI moment. Accenture analysis found reinventors (those companies that have already built the capability for continuous reinvention) increased revenues by 15 percentage points over other companies between 2019 and 2022. We expect that gap in revenue growth between reinventors and the rest to increase by 2.4 times to 37 percentage points by 2026, so theres a clear opportunity for them. Yet our survey of global semiconductor executives found that 71% believe it will take at least three years for the semiconductor industry to deploy generative AI at scale. The industry could do with that timeframe accelerating somewhat.Related:The Challenges AheadIts not going to be easy, however, of course it wont. But semiconductor companies need to use generative AI across the entire spectrum -- spanning design and manufacturing, through sales and marketing, to customer service --to seize opportunities for innovation in both the short- and long-term. Adapting that broad view across the value chain is a must to reinvent the value chain, however daunting that may initially seem.There are other concerns too, such as IP. In fact, 73% of executives cite IP concerns as the biggest barrier to generative AI deployments. Then theres of course the cost issue and the need to balance technical debt with investments for the future, both of which, are necessary.Once leaders grapple with how those challenges can be overcome, theres another pressing challenge and thats having the right talent in place to deploy these applications successfully.Related:Most semiconductor companies are already fully aware of that and are doing everything they can to accelerate gaining new talent and reskilling their existing workforce. However, the speed with which generative AI is changing the way businesses work means they must also get support from across their ecosystem to ensure they have all critical skills in place.It's Time for Leaders to Place Their BetsThe industry needs to move forward with two workstreams running in parallel. First, CEOs and other business leaders must make no regrets moves; those use cases with the lowest risk, shortest time to show results and therefore, value. For example, Generative AI-enabled field service assistants would allow field service engineers to perform root cause analyses faster and recommend repair methods based on machine data, therefore reducing downtime and accelerating production. It also provides immediate access to information that helps technicians increase their knowledge, therefore helping with the skills gap. Generative AI can also be used in other areas, such as sales and marketing where it can improve the quality and level of personalization of the content to drive more personalized campaigns.Related:At the same time, strategic bets need to be decided upon to support the long-term goals of the business. An example of this is in process engineering. Generative AI-enabled applications that incorporate historical process parameter data to create more efficient designs for semiconductor equipment and wafer development. These tools can use drawings, text, images and more to create customized outputs that engineers can use to augment experiments, allowing for a more objective approach to experimental design. These strategic bets will be the things that will offer the highest value. They may well take some time to roll out, but they could pave the way for total reinvention and therefore, competitive advantage.Whether the no regret moves or strategic bets, the guiding principle is choosing the right use cases, at the right point and at the right time. Every semiconductor companys generative AI journey is different, but the approaches will be similar. All companies must establish a solid data foundation, have the necessary skills in place, and importantly, have the right ecosystem in position. Those that come out on top wont just be the best player, but the businesses that put the right connections in place.About the AuthorMarco AddinoManaging Director, AccentureMarco Addino is a managing director in Accentures high tech industry practice leading the companys semiconductor business in EMEA, and is the client account lead for Italy, Central Europe and Greece, responsible for building and growing strategic relationships in the region. He is experienced in high complexity products engineering, supply chain and operations, large scale digital and technology transformations, organizational design, post-merger integration, and the design and implementation of platform business models.See more from Marco AddinoNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 65 Views
-
WWW.INFORMATIONWEEK.COMEdge Extending the Reach of the Data CenterCompanies are keeping their central data centers, but theyre also moving more IT to the enterprise edge. The result is a re-imagined concept of data center that includes the data center but also subsumes cloud and other edge-computing operations.In this expanded data center model, ITs role hasnt fundamentally changed. It must still implement, monitor and maintain data center operations, no matter where they occur.But since IT staff cant be at all remote locations at once, software and hardware technologies are being called upon to do the job of facilitating end-to-end data center management, no matter where that management is.Technologies to Facilitate Remote Data Center ManagementTo assist IT in managing the expanded data center, tools and technology solutions must do two key things: monitor and manage IT operations, functions and events; and automate IT operations.Here are five technologies that help:System on a chip (SOC). First conceived in the 1970s, system on a chip embeds processing, memory and, today, even security and artificial intelligence on a single chip. The chip powers a device or network endpoint.SOC can appear in a router, sensor, smartphone, wearable, or any other Internet of Things (IoT) device. The original selling point of SOCs was their ability to offload processing from the central data center and reduce latency when processing can be done locally.Related:Now, these SOC routers, devices, and access points come with embedded security that is WPA2/3 compliant and can encrypt data and block DNS attacks or suspicious websites. That security is complemented with AI that aids in threat detection and in some cases, threat mitigation, such as being able to automatically shut down and isolate a detected threat.To use SOC threat detection and mitigation at the edge, IT must:Ensure that the security ruleset on edge devices is in concordance with corporate-wide data center security policies; andEmploy an overarching network monitoring solution that can integrate the SOC-based security with central data center security and monitoring so every security action can be observed, analyzed, and mitigated from a single pane of glass in the central data center.Zero-trust networks. Zero-trust networks trust no one with unlimited access to all network segments, systems, and applications. In the zero-trust scheme, employees only gain access to the IT resources they are authorized for.Users, applications, devices, endpoints, and the network itself can be managed from a central point. Internal network boundaries can be set to allow only certain subsets of users access. An example is a central data center in Pittsburgh with a remote manufacturing plant in Phoenix. A micro network can be defined for the Phoenix plant that can only be used by the employees in Phoenix. Meanwhile, central IT has full network management, monitoring, and maintenance capability without having to leave the central data center in Pittsburgh.Related:Automated operations. Data and system backups can be automated for servers deployed at remote points, whether these backups are ultimately rerouted to the central data center or a cloud service. Other IT functions that can be automated with guidance from an IT ruleset include IT resource provisioning and de-provisioning, resource optimization, and security updates that are automatically pushed out for multiple devices.Its also possible to use remote access software that allows IT to gain control of a users remote workstation to fix a software issue.Edge data centers.Savings in communications can be achieved, and low-latency transactions can be realized if mini-data centers containing servers, storage and other edge equipment are located proximate to where users work. Industrial manufacturing is a prime example. In this case, a single server can run entire assembly lines and robotics without the need to tap into the central data center. Data that is relevant to the central data center can be sent later in a batch transaction at the end of a shift.Related:Organizations are also choosing to co-locate IT in the cloud. This can reduce the cost of on-site hardware and software, although it does increase the cost of processing transactions and may introduce some latency into the transactions being processed.In both cases, there are overarching network management tools that enable IT to see, monitor and maintain network assets, data, and applications no matter where they are. The catch is that there are still many sites that manage their IT with a hodgepodge of different types of management software.A single pane of glass. At some point, those IT departments with multiple network monitoring software packages will have to invest in a single, umbrella management system for their end-to-end IT. This will be necessary because the expanding data center is not only central, but that could be in places like Albuquerque, Paris, Singapore and Miami, too.ITs end goal should be to create a unified network architecture that can observe everything from a central point, facilities automation, and uses a standard set of tools that everybody learns.Are We There Yet?Most IT departments are not at a point where they have all of their IT under a central management system, with the ability to see, tune, monitor and/or mitigate any event or activity anywhere. However, we are at a point where most CIOs recognize the necessity of funding and building a roadmap to this uber management network concept.The rise of remote work and the challenge of managing geographically dispersed networks have driven the demand for network management system (NMS) solutions with robust remote capabilities, reports Global Market Insights, adding, As enterprises increasingly seek remote network management, the industry is poised for substantial growth.0 Σχόλια 0 Μοιράστηκε 64 Views
-
WWW.INFORMATIONWEEK.COMBuilding an Augmented-Connected WorkforceJohn Edwards, Technology Journalist & AuthorNovember 15, 20245 Min ReadSasin Paraksa via Alamy Stock PhotoIn their never-ending quest to improve efficiency and productivity, a rapidly growing number of enterprises are currently building, or planning to build, augmented-connected workforces. An augmented-connected workforce allows humans and machines to work together in close partnership. The goal is people and devices functioning more productively and efficiently than when working in isolation.An augmented-connected workforce can be defined as a tech-enabled workforce of humans that have access to next-generation technologies, such as AI, IoT, and smart devices, to do their day-to-day jobs, says Tim Gaus, a principal and smart manufacturing business leader with Deloitte Consulting, in an online interview. "These technologies add a level of intelligence and efficiency for employees by providing skills that humans dont possess while allowing workers to focus on higher level, strategic work." In general, augmented-connected workforces allow for a more dynamic, connected work environment that prepares human team members to work seamlessly with high technology devices.Building the CaseToday's workforce is moving rapidly toward an integrated, interconnected ecosystem of workers and technology. "By evolving our mindset on what a workforce is, it becomes clear that an augmented-connected workforce provides the most potential," Gaus says.Related:An augmented-connected workforce's benefits vary significantly depending on the type of augmentation being applied, says Melissa Korzun, vice president of customer experience operations at technology services firm Kantata. On the whole, however, it can reduce errors, decrease costs, improve quality, and even contribute to safer working conditions in manufacturing sectors, she notes in an email interview.Other potential benefits include faster training and upskilling, improved safety, enhanced efficiency, and better cost management. "In manufacturing, for example, as businesses look to expand production capabilities, using innovative tools designed for workers can help streamline processes, leading to faster time-to-market," Gaus explains.Korzun notes that in the business sector an augmented-connected workforce promises to build significant administrative efficiency. It can, for example, reduce the time needed to process large volumes of information while creating the ability to summarize unstructured data sets. Companies that take advantage of these new assistive capabilities will benefit from improved productivity, increased quality, and less burnout in their workforce, she says.Related:As organizations continue to scale their augmented-connected workforces, additional benefits are likely to emerge. "Life sciences, for example, has seen a huge benefit in leveraging computers to expedite data analysis and then pairing humans to use these discoveries to create new therapies for diseases," Gaus says. He expects that many other discoveries will emerge across industries over time, leading to innovations as well as new opportunities to engage customers.Virtual AssistanceAn augmented workforce can work faster and more efficiently thanks to seamless access to real-time diagnostics and analytics, as well as live remote assistance, observes Peter Zornio, CTO at Emerson, an automation technology vendor serving critical industries. "An augmented-connected workforce institutionalizes best practices across the enterprise and sustains the value it delivers to operational and business performance regardless of workforce size or travel restrictions," he says in an email interview.An augmented-connected workforce can also help fill some of the gaps many manufacturers currently face, Gaus says. "There are many jobs unfilled because workers aren't attracted to manufacturing, or lack the technological skills needed to fill them," he explains.Related:Building a PlanTo keep pace with competitors, businesses should develop a comprehensive strategy for utilizing new technologies, including establishing a cross-functional team that's dedicated to identifying critical areas where technology augmentation can help solve core business challenges, Korzun says. "There are lots of shiny objects out there to chase right now -- focus on applying new tech capabilities to your most critical business issues." To assist with planning, she advises IT leaders to talk with their vendors about their current augmented-connected workforce technologies and their roadmaps for the future.For enterprises that have already invested in advanced digital technologies, the path leading to an augmented-connected workforce is already underway. The next step is ensuring a holistic approach when looking at tangible ways to achieve such a workforce. "Look at the tools your organization is already using -- AI, AR, VR, and so on -- and think about how you can scale them or connect them with your human talent," Gaus says. Yet advanced technologies alone aren't enough to guarantee long-term success. "Innovative tools are the starting point, but finding ways to make human operations more efficient will lead to true impact."Final ThoughtsWhile many enterprises have already begun integrating emerging technologies into routine tasks, innovation alone without considering the role humans will play within the new model can lead to slower progress in an augmented-connected model, Gaus warns. "Humans are much more likely to engage with and utilize technology they understand and trust." The other piece of the puzzle is ensuring that workers are appropriately skilled in the new technologies entering the business.Businesses must continue to embrace technology and digital transformation in order to build the most dynamic workforce possible, Gaus states. "Doing so will maximize their technology investment and create a more connected, reliable workforce."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 57 Views
-
WWW.INFORMATIONWEEK.COMTSMC Secures $6.6B as Biden Administration Races to Dole Out CHIPS Act FundsWith uncertainty about how a new Trump Administration will handle the $52.7 billion program, the outgoing administration is under pressure to make good on one of its signature legislative wins.0 Σχόλια 0 Μοιράστηκε 64 Views
-
WWW.INFORMATIONWEEK.COMShedding Light on Your Shadow ITMario Platt, Vice President and CISO, LastPassNovember 14, 20244 Min ReadElly Miller via Alamy StockShadow IT has long been a problem for companies, from personal devices brought into the workplace to untested software installed inside the perimeter. As companies have moved to cloud, the problem has only become more tangled: Well-meaning employees set up unsanctioned services, and technical teams use unapproved cloud services to add functionality to their projects.Plus, remote employees and their mashup of consumer and pro-sumer technologies bring less visibility and more risks into the IT-security equation.According to HashiCorp's 2024 study, only 8% of companies had highly mature" practices across both infrastructure and security lifecycle management. Add to that mix the chaos of a merger or divestiture, and problems can grow quickly. The blending of two technology platforms in a merger or the breaking apart of common infrastructure in a divestiture likely leads to breakage and the loss of security oversight.Managing shadow IT is an ongoing challenge that requires a combination of technical controls, governance processes, and cultural change to address it effectively. Here are three ways that companies can get a handle on shadow IT.1. SSO is necessary, but far from sufficient. A common way to gain visibility into cloud and on-premises services is to rely on single sign-on (SSO) platforms to know which applications and services employees are using. The challenge, however, is that not every application is SSO-enabled, especially cloud or mobile applications on employees personal devices that are often used for work.Related:Separations and divestitures produce duplicates of most critical services, new devices for employees, and the need for a revamp of all security controls, as a company moves from legacy services to a new platform. During these times, detection, analysis and response to threats (DART) can be particularly challenging.The lesson for corporate security teams is not only to gain visibility, but to create a backend process that educates employees and diverts them from non-approved risky applications to approved platforms.2. Assets must be discovered across hybrid infrastructure. Another challenge is the proliferation of remote and mobile workers, whose devices -- often poorly managed -- exist in home offices or often connect from the road.For in-house workers, companies have default control over on-premises technology, even if that technology is non-sanctioned shadow IT. To help manage remote technology, companies should have agents on any device connecting to a corporate cloud service or using a virtual private network. Such security can be sufficient, depending on how your company implements the defenses and checkpoints.Related:During a merger, organizations must gain clear visibility of all IT assets across the new enterprise and enforce a zero-trust approach to any access to sensitive corporate data. During a separation, organizations may lose visibility of devices and applications, resulting in shadow IT and potential vectors of attack.The transition to remote work caused by the coronavirus pandemic forced many companies to switch to secure web gateways to enforce policies with in-house and remote employees. Companies should focus on additional zero-trust security measures to enforce security policies even when employees are outside of the corporate firewall.3. Cultural changes are necessary. Organizations must make sure that every cloud service supports their mission of security, and no technology is unmanaged. This is especially true during challenging events, such as a merger or divestiture.Shadow IT comes from a culture that treats the security teams as gatekeepers that can be evaded. According to software supply-chain firm Snyk, more than 80% of companies have developers skirting security policies and using AI code completion tools to generate code. ChatGPT and other large language models (LLMs) became the top shadow IT in 2023, months after release.Related:Companies need to show employees why security is necessary to keep the business running and what the consequences could be if that focus is lost. Keeping that focus is admittedly difficult, especially when companies often go through a cycle of alternately emphasizing security and cost savings.Effective management of shadow IT calls for a combination of strong technical measures and cultivating a culture of security awareness, thereby reducing the risks associated with unapproved tools and services. In times of rapid digital transformation, especially during mergers and divestitures, creating a flexible IT infrastructure that adapts to change is key to safeguarding security and maintaining trust across the business.About the AuthorMario PlattVice President and CISO, LastPassMario Platt is an accomplished, highly respected and innovative information security expert, with a multi-faceted track record of expertise ranging from penetration testing, operations, product management, design authority, risk management and governance; with success in attaining and maintaining compliance through security frameworks, across telecommunications, retail, healthcare and public sector organizations throughout the last 15+ years.See more from Mario PlattNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 55 Views
-
WWW.INFORMATIONWEEK.COMWhat Could the Trump Administration Mean for Cybersecurity?The results of the 2024 US presidential election kicked off a flurry of speculation about what changes a second Donald Trump administration will bring in terms of policy, including cybersecurity.InformationWeek spoke to three experts in the cybersecurity space about potential shifts and how security leaders can prepare while the industry awaits change.Changes to CISAIn 2020, Trump fired Cybersecurity and Infrastructure Security Agency (CISA) Director Christopher Krebs after he attested to the security of the election, despite Trumps unsupported claims to the contrary. It seems that the federal agency could face a significant shakeup under a second Trump administration.The Republican party believes that agency has had a lot of scope creep, AJ Nash, founder and CEO of cybersecurity consultancy Unspoken Security, says.For example, Project 2025, a policy playbook published by conservative think tank The Heritage Foundation, calls to end CISAs counter-mis/disinformation efforts. It also calls for limits to CISAs involvement in election security. The project proposes moving the CISA to the Department of Transportation.Trump distanced himself from Project 2025 during his campaign, but there is overlap between the playbook and the president-elects plans, the New York Times reports.Related:I think it safe to say that CISA is going to have a lot of changes, if it exists at all, which I think [is] challenging because they have been very responsible for both election security and a lot of efforts to curb mis-, dis- and malinformation, says Nash.AI Executive OrderIn 2023, President Biden signed an executive order regarding AI and major issues that arose in the wake of its boom: safety, security, privacy, and consumer protection. Trump plans to repeal that order.We will repeal Joe Bidens dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing, according to a 2024 GOP Platform document.Less federal oversight on the development of AI could lead to more innovation, but there are questions about what a lack of required guardrails could mean. AI, how it is developed and used, has plenty of ramifications to cybersecurity and beyond.The tendency of generative AI to hallucinate or confabulate that's the concern, which is why we have guardrails, points out Claudia Rast, chair of the intellectual property, cybersecurity, and emerging technology practice at law firm Butzel Long.Related:While the federal government may step back from AI regulation, that doesnt mean states will do the same. You're going to see California [and] Texas and other states taking a very proactive role, says Jeff Le, vice president of global government affairs and public policy at cybersecurity ratings company SecurityScorecard.California Governor Gavin Newsom signed several bills relating to the regulation of GenAI. A bill -- the Texas Responsible AI Governance Act (TRAIGA) -- was introduced in the Lone Star State earlier this year.Cybersecurity RegulationThe Trump administration is likely to roll back more cybersecurity regulation than it will introduce. I fully anticipate there to be a significant slowdown or rollback on language or mandated reporting, incident reporting as a whole, says Le.Furthermore, billionaire Elon Musk and entrepreneur Vivek Ramaswamy will lead the new Department of Government Efficiency, which will look to cut back on regulation and restructure federal agencies, Reuters reports.But enterprise leaders will still have plenty of regulatory issues to grapple with. They'll be looking at the European Union. They'll be looking at regulations coming out of Japan and Australia they'll also be looking at US states, says Le.That's going to be more of a question of how they're going to navigate this new patchwork.Related:Cyber Threat ActorsNation state cyber actors continue to be a pressing threat, and the Trump administration appears to be planning to focus on malicious activity coming out of China, Iran, North Korea, and Russia.I do anticipate the US taking a more aggressive stance, and I think that's been highlighted by the incoming national security advisor Mike Waltz, says Le. I think he has made a point to prioritize a more offensive role, and that's with or without partners.Waltz (R-Fla.) has been vocal about combatting threats from China in particular.Preparing for ChangePredicting a political future, even just a few short months away, is difficult. With big changes to cybersecurity ahead, what can leaders do to prepare?While uncertainty prevails, enterprise leaders have prior cybersecurity guidelines at their fingertips today. It's time to deploy and implement the best practices that we all know are there and [that] people have been advising and counseling for years at this point, says Rast.0 Σχόλια 0 Μοιράστηκε 57 Views
-
WWW.INFORMATIONWEEK.COMWhy CIOs Must Lead the Charge on Sustainable TechnologyHiren Hasmukh, CEO of TeqtivityNovember 13, 20244 Min ReadKanawatTH via Alamy StockEvery week, I meet CIOs who tell me the same story: Environmental sustainability has moved from their wish list to their priority list. Regulatory pressures demand they track carbon emissions. Boards expect detailed reports on energy usage. Customers scrutinize their sustainability practices. This puts leaders in a tough position -- in order to remain competitive in the marketplace, we must continue to keep up with advancing technology. But how do we stay sustainable in doing so?The New Reality of Sustainable TechnologyGreen technology isn't just about reducing environmental impact -- it's about rethinking how we deliver IT services. Instead of asking ourselves how to save energy, we must ask ourselves larger questions. How can sustainable IT drive innovation? How can it create a competitive advantage? The challenge isn't whether to act, but how to begin.The Value of Green ITMany executives believe that green IT is only about saving money. However, cost savings are only one aspect of sustainable technological practices. Let's break down the real business impact:Immediate cost reduction. Energy costs typically represent 40-60% of a data center's operating expenses. Organizations implementing efficient power management often see utility bills drop within the first quarter. But that's just the beginning.Related:Extended asset value. Smart lifecycle asset management reduces e-waste and impacts the balance sheet. When organizations move from reactive to proactive maintenance, they often discover their technology investments can deliver value for years longer than expected.Risk mitigation. With environmental regulations tightening globally, companies investing in sustainable technology are now better positioned to avoid future penalties and compliance costs.Competitive advantage. The Business of Sustainability study reported 78% of consumers want to buy from environmentally friendly organizations. Companies that commit to strong environmental practices will attract both more clients and talent.Moving from Vision to ActionThe business case for sustainable technology is clear. Here are a few ways your team can get started with building a more sustainable IT infrastructure:Start with data center efficiency: Heres a startling fact: Research shows that almost a third of data center servers are considered zombies --meaning they consume power while serving no purpose. Why does that happen? Poor documentation means nobody knows what to turn off. IT teams should implement automated tracking systems to map every asset's purpose and usage. An automated process will help further eliminate these zombies and optimize remaining systems.Embrace the cloud strategically: Major cloud providers have invested billions in renewable energy and efficient data centers, making them an attractive option for sustainable IT. However, using cloud solutions requires strategy. Teams should map their workloads carefully -- some applications deliver better environmental and business outcomes on-premises or in hybrid environments.Rethink device lifecycles: Many organizations default to replacing devices every three years, regardless of whether they need to or not. Companies can significantly extend the lifecycles of their devices through proactive maintenance and matching device capabilities to user requirements. This reduces e-waste while delivering substantial cost savings.Related:Building a Culture of SustainabilityOrganizations should also create a culture that embraces these practices wholeheartedly. Here's what works:Start with why: Help employees understand the environmental impact of technology choices. When teams understand how their daily decisions within the company affect the environment, they become partners in the solution.Related:Make it measurable: Set achievable energy reduction and sustainable practices goals. Track and share progress regularly. What gets measured gets managed.Celebrate progress: Recognize teams and individuals who champion sustainable practices. Success stories inspire others and build momentum for broader changes.The Path ForwardAs technology leaders, we stand at a crucial intersection. The decisions we make today about our IT infrastructure will impact our planet for years to come.Most importantly, our teams are ready for change. Theyre looking to us for leadership on sustainability. Every day we wait is a missed opportunity to drive value, reduce costs, and make a meaningful environmental impact.The question isn't whether to embrace sustainable technology -- it's how quickly we can make it happen. The tools exist. The business case is clear. The time for CIOs to lead this charge is now.About the AuthorHiren HasmukhCEO of TeqtivityHiren Hasmukh is the CEO and founder of Teqtivity, a leading IT Asset Management solutions provider. With over two decades of experience in the technology sector, Hiren has been at the forefront of developing innovative ITAM strategies for businesses navigating the complexities of digital transformation. Under his leadership, Teqtivity has evolved from a smart locker concept to a comprehensive ITAM solution serving companies of all sizes.See more from Hiren HasmukhNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 53 Views
-
WWW.INFORMATIONWEEK.COMWhere IT Consultancies Expect to Focus in 2025In the past few years, artificial intelligence has dominated New Years predictions. While the same can be said about 2025, scalability, responsibility, and safety will be stronger themes.For example, global business and technology consulting firm West Monroe Partners sees data and data governance being major focus areas.Its no longer just about quick wins or isolated use cases. The focus is shifting towards building robust data platforms that can support long-term business goals as they move forward, says Cory Chaplin, technology and experience practice leader at West Monroe. A key part of this evolution is ensuring that organizations have the right data foundation in place which in turn allows them to harness the full potential of advanced uses like analytics and AI.Efforts Will Focus on Responsible and Safe UseGenAI has caught the attention of boards and CEOs, but its success hinges on having clean, accessible data.Much of whats driving conversations around AI today is not just the technology itself, but the need for businesses to rethink how they use data to unlock new opportunities, says Chaplin. AI is part of this equation, but data remains the foundation that everything else builds upon.West Monroe also sees a shift toward platform-enabled environments where software, data, and platforms converge.Rather than creating everything from scratch, companies are focusing on selecting, configuring, and integrating the right platforms to drive value. The key challenge now is helping clients leverage the platforms they already have and making sure they can get the most out of them, says Chaplin. As a result, IT teams need to develop cross-functional skills that blend software development, platform integration and data management. This convergence of skills is where we see impact -- helping clients navigate the complexities of platform integration and optimization in a fast-evolving landscape.Right now, organizations face significant challenges keeping pace with rapid technological advancements, especially with AI evolving so quickly. While many organizations have built substantial product and data teams, their ability to adapt and innovate at business speed often falls short.Cory Chaplin, West MonroeIts not just about having the right headcount. Its about the capacity to move quickly and embrace new technologies, says Chaplin. Even with skilled talent, internal teams can get bogged down by established processes and pre-existing organizational structures. The demand for specialized expertise in AI and data-driven fields continues to outpace supply, complicating their transformation journeys. This is where we provide the support needed to challenge existing paradigms and accelerate their progress.Over the last few years, there has been a gap between expectations and progress. Despite the hype surrounding AI, data, and new technologies, many organizations have struggled to realize the full value of their investments, irrespective of industry.Organizations are tired of chasing buzzwords, says Chaplin. They want AI to be a productive part of their operations, working behind the scenes to enhance existing platforms, support their teams and drive growth. They [also] want help embedding AI into their current operations, ensuring that its not just another shiny tool, but a core driver of growth and efficiency within existing business operations.AI Plus ModernizationThe demand for AI/ML and GenAI is growing across industries, particularly in areas like automation, predictive analytics, and personalized customer experiences. Data and analytics remain crucial as businesses aim to harness their data to make smarter, faster decisions. Cloud and application modernization are also essential as many organizations want to update legacy systems, improve agility, and adopt cloud-native technologies.Many clients need help with scalability, technology integration and data modernization. They may need help with outdated systems, underutilized data or the complexities of adopting new technologies, particularly in highly regulated industries like life sciences and energy, says Stephen Senterfit, president of enterprise business consultancy Smartbridge, Additionally, the rapid pace of innovation can make it hard for businesses to know where to focus their resources.With this help, enterprises should see improved operational efficiencies, better data-driven decision-making and more robust customer engagement. They will also be able to scale rapidly, remain competitive in their respective industries and innovate in ways that were previously out of reach.Smartbridge's relationship with clients is evolving from technology service provider to strategic partner, says Senterfit. Clients expect us to help them navigate broader digital strategies, advise them on tech implementation and innovation roadmaps, and future-proof their business models.AI-Related Change Management and UpskillingAs AI continues to become increasingly mainstream, theres a growing demand for organizational design, change management, and upskilling services designed to get more out of new ways of working and managing organizational shifts.Clients are increasingly asking, How do we build AI into our business? says West Monroes Chaplin. This isnt just about implementing new technologies, its about preparing the workforce and the organization to operate in a world where AI plays a significant role. Theres momentum building around this intersection of organizational design, change management, and upskilling -- helping companies function effectively in an AI-driven environment.CybersecurityAs businesses adopt AI, use more data, and deploy new emerging technologies, cybersecurity becomes even more critical.With increased platform adoption, securing confidential information is paramount. We see a renewed emphasis on secure software development practices and tighter controls on AI/ML model usage, ensuring protection as organizations scale their AI initiatives, says Chaplin. By understanding and utilizing their data effectively, organizations can foster a culture where data-driven insights inform decision-making processes. This not only enhances operational efficiency but also drives innovation across the business.Organizations should consider data a critical asset that requires attention and strategic use. This requires a mindset shift that can lead to improved outcomes across various functions, particularly in cybersecurity, where organizations can de-risk their operations even amid rapid changes.With platform-enabled environments, organizations can reduce their reliance on fully custom solutions. By leveraging existing platforms and their roadmaps, companies can enhance their agility and speed of implementation, says Chaplin. This approach allows for a greater emphasis on building from proven solutions rather than creating from scratch, ultimately facilitating quicker adaptations to market demands.Data and Customer FocusAs companies increasingly focus on digital transformation, data-driven decision-making and improving customer engagement, they look to consultancies for help.Our data engineering practice will play a central role in helping businesses migrate from legacy systems to the cloud, a significant challenge for many organizations as they modernize their analytical workloads, says Alex Mazanov, CEO at full service consulting firm T1A. By 2025, we anticipate an even greater demand for scalable, cloud-based data architectures capable of handling vast amounts of real-time data. Many organizations are moving away from outdated legacy systems, such as SAS, to modern cloud platforms like Databricks.Continued data explosion, combined with AI advances, is pushing companies to modernize their data infrastructure.Businesses are increasingly challenged to make faster, smarter decisions, and well provide the tools and expertise to architect solutions that scale with their needs, ensuring data is a true asset rather than a burden, says Mazanov. Additionally, transitioning to open-source platforms and government-compliant technologies will help businesses stay agile, cost-efficient and aligned with regulatory demands.AI is also becoming more prevalent in CRM scenarios because it increases productivity, reduces costs, and helps maximize customer lifetime value. Specifically, they want to enhance loyalty programs, improve customer retention and use data analytics to predict behavior across the entire customer lifecycle.Optimizing the customer journey will continue to be crucial in 2025, as businesses will increasingly focus on maximizing customer lifetime value [using] advanced tools and strategies to improve every touchpoint in the customer journey, says Mazanov. Many companies struggle to optimize this.Finally, process intelligence will be even more critical by 2025, as companies continue to streamline operations, reduce inefficiencies, and cut costs in an increasingly competitive market. AI and machine learning will be used to automate and optimize business processes. As industries move toward hyper-automation, Mazanov says clients will need to become more agile and efficient.Organizations are constantly seeking ways to reduce operational costs while improving efficiency, he says. By 2025, companies will face rising expectations to do more with less, and process intelligence will be a vital tool to achieve this. Our solutions will focus on creating smarter, more efficient workflows, powered by AI to reduce manual tasks and human error.Stephen Senterfit, SmartbridgeMany organizations are experiencing the dichotomy of being challenged by the complexity of their data and needing real-time insights. Meanwhile, customer expectations continue to grow.[Our relationship with clients [is] evolving from being a service provider to a strategic partner. By 2025, we anticipate playing a more consultative role, helping clients not just implement technology but also reimagine their business models around data and AI, says Mazanov. Well be focused on long-term partnerships, co-creating innovative solutions that align with their broader business strategy.Get Help When You Need ItCompanies have many different reasons for seeking outside assistance. Sometimes the engagement is tactical and sometimes its strategic. The latter is becoming more common because it drives more value.One of the least valuable engagements is hiring a consultancy to solve a problem without internal involvement. When the consultants conclude their arrangement, considerable valuable knowledge may be lost. Working as a partner results in greater transparency and continuity.One benefit of using consultants, not mentioned above but critically important, is insight clients may lack, such as having a deep understanding of how emerging technology is utilized in the clients particular industry and whats worked best for other industries and why, which can result in important insights and innovations. They also need to understand the clients business goals so that IT implementations deliver business value.0 Σχόλια 0 Μοιράστηκε 54 Views
-
WWW.INFORMATIONWEEK.COMFrom Declarative to Iterative: How Software Development is EvolvingLisa Morgan, Freelance WriterNovember 12, 20246 Min ReadDragos Condrea via Alamy StockSoftware development is an ever-changing landscape. Over the years, it has become easier to generate high-quality code faster, though the definition of faster is a moving target.Take low-code tools, for example. With them, developers can build most of the functionality they need with the platform, so they only need to write the custom code the application requires. Low-code tools have also democratized software development -- particularly with the addition of AI.GenAI is accelerating development even further, and its changing the way developers think about code.Siddharth Parakh, senior engineering manager at Medable, expects Ai to revolutionize productivity.The ability for AI to automate repetitive tasks, refactor code and even generate solutions from scratch would allow developers to focus on higher-order problem-solving and strategic design decisions, says Parakh in an email interview. With AI handling routine coding, developers could become orchestrators of complex systems rather than line-by-line authors of software.But theres a catch: Currently, AI-generated code cannot fully replace human intuition in areas such as creative problem solving, contextual understanding, and domain-specific decision-making. Also, AI models are only as good as the data they are trained on, which can lead to bias issues, error propagation or unsafe coding practices, he says. Quality control, debugging, and nuanced decision-making are still areas where human expertise is necessary.Related:How AI HelpsThe operative work is automation.If AI takes over the majority of coding tasks, it would drive unprecedented efficiency and speed in software development, says Medables Parakh. Teams could iterate faster, adapt to changes more fluidly and scale projects without the traditional bottlenecks of manual coding. This could democratize software development, enabling non-experts to create functional software with minimal input.Geoffrey Bourne, co-founder of social media API company Ayrshare, says GenAI coding assistants are now an integral part of his coding.They produce lines of code which save me hours on a weekly basis. But, although the results are improving, theyre correct less than 40% of the time. You need the experience to know the code just isnt up to scratch and needs adjusting or a redo, says Bourne in an email interview. Newbie coders are starting out with these assistants at their fingertips but without the years of experience writing code their seniors have. Weve got to take this into account and not necessarily limit their access but find creative ways to inject that knowledge. You need to find a balance [between] the instant code fix with healthy experience and a critical eye.Related:The evolution of programming, especially through abstraction layers and GenAI, has significantly transformed the way Surabhi Bhargava, a machine learning tech leadat Adobe, approaches her work.GenAI has made certain aspects of development much faster. Writing boilerplate code, prototyping and even debugging is now more streamlined. Finding information across different documents is easier with AI and copilots, says Bhargava in an email interview. [Though] AI can speed things up, I now [must] critically assess AI-generated outputs. It has made me more analytical in reviewing the work produced by these systems, ensuring it aligns with my expectations and needs, particularly when handling complex algorithms or compliance-driven work.AI tools are also helping her create rapid prototypes and theyre reducing the cognitive load.I can focus more on strategic thinking, which improves productivity and gives me room to innovate, says Bhargava. Sometimes, its tempting to lean too heavily on AI for code generation or decision-making. AI-generated solutions arent always optimized or tailored for the specific needs of a project, resulting in bugs and issues in prod. [And] sometimes, it takes more time to set it up if the tools are complex to use.Related:Hands-Free Coding Still Hasnt ArrivedAt present, AI struggles with its own set of issues such as misinterpretation, hallucination and incorrect facts. Over-reliance on AI-generated code could lead to a lack of deep technical expertise in development teams.With humans less involved in the nitty-gritty of coding, we could see a decline in the essential skills needed to debug, optimize, or creatively problem-solve at a low level. Additionally, ethical and security concerns could arise as AI systems might unknowingly introduce vulnerabilities or generate biased solutions, says Parakh. Tom Taulli, author of AI-Assisted Programming: Better Planning, Coding, Testing, and Deployment has been using AI-assisted programming tools for the past couple years. This technology has had the most transformative impact by far on his work in his over 40-year work history.Whats interesting is that I approach a project in terms of natural language prompts, not coding or doing endless searchers on Google and StackOverflow. In fact, I set up a product requirements document that is a list of prompts. Then, I go through each one for the development of an application, says Taulli. These systems are far from perfect. But it only takes a few seconds to generate the code -- and this means I have more time to review it and make iterations.Taulli has been a backend developer primarily, but AI assisted programming has allowed him to do more front-end development.The funny thing is that one of the biggest drawbacks is the pace of innovation with these tools. It can be tough to keep up with the many developments, says Taulli. True, there are other well-known disadvantages, such as with security and intellectual property. Is the code being copied? Do you really own the code you create? says Taulli. However, I think one of the biggest drawbacks is the context window. Basically, the LLMs cannot understand the large codebases. This can make it difficult for sophisticated code refactoring..Another issue is the cut-off date of the LLMs. They may not have the latest packages and frameworks, but the benefits outweigh the drawbacks, he says.Tom Jauncey, head nerd at digital marketing agency Nautilus Marketing, says GenAI tools like GitHub Copilot have accelerated the coding process by letting him think about high-level architecture and design. His advice is to use AI to save time on boilerplate code and documentation.Some of the things that I had to learn were how to prompt AI tools and think critically about their output. It is important to remember that while AI is great at generative code, it doesn't always understand broader context and business requirements, says Jauncey. Thus, always cross-check the AI-generated code with official documentation. AI-powered tools ease the effort of exploring a new language or framework without having to go into syntax details.Edward Tian, CEO of GPTZero, believes its better to use GenAI to assist coding rather than relying on it entirely.Personalization is such a key aspect of coding, and GenAI sometimes just cant quite personalize things in the way you want. It can certainly create complicated code, but it just often falls short in terms of uniqueness, says Tian.Bottom LineGenAI is accelerating development by generating code quickly but beware of its limitations. While its good for writing boilerplate code and documentation, creating quick prototypes and debugging, its important to verify the outputs. Prompt engineering skills also help boost productivity.About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 56 Views
-
WWW.INFORMATIONWEEK.COMUnicorn AI Firm Writer Raises $200M, Plans to Challenge OpenAI, AnthropicThe company -- taking direct aim at OpenAI, Anthropic, and other incumbents in the GenAI arms race -- plans to use the funding to fuel its agentic AI efforts.0 Σχόλια 0 Μοιράστηκε 54 Views
-
WWW.INFORMATIONWEEK.COMHow IT Can Show Business Value From GenAI InvestmentsNishad Acharya, Head of Talent Network, TuringNovember 11, 20244 Min ReadNicoElNino via Alamy StockAs IT leaders, were facing increasing pressure to prove that our generative AI investments translate into measurable and meaningful business outcomes. It's not enough to adopt the latest cutting-edge technology; we have a responsibility to show that AI delivers tangible results that directly support our business objectives.To truly maximize ROI from GenAI, IT leaders need to take a strategic approach -- one that seamlessly integrates AI into business operations, aligns with organizational goals, and generates quantifiable outcomes. Lets explore advanced strategies for overcoming GenAI implementation challenges, integrating AI with existing systems, and measuring ROI effectively.Key Challenges in Implementing GenAIIntegrating GenAI into enterprise systems isnt always straightforward. There are several hurdles IT leaders face, especially surrounding data and system complexity. Data governance and infrastructure. AI is only as good as the data its trained on. Strong data governance enforces better accuracy and compliance, especially when AI models are trained on vast, unstructured data sets. Building AI-friendly infrastructure that can handle both the scale and complexity of AI data pipelines is another challenge, as these systems must be resilient and adaptable.Related:Model accuracy and hallucinations. GenAI models can produce non-deterministic results, sometimes generating content that is inaccurate or entirely fabricated. Unlike traditional software with clear input-output relationships that can be unit-tested, GenAI models require a different approach to validation. This issue introduces risks that must be carefully managed through model testing, fine-tuning, and human-in-the-loop feedback.Security, privacy, and legal concerns. The widespread use of publicly and privately sourced data in training GenAI models raises critical security and legal questions. Enterprises must navigate evolving legal landscapes. Data privacy and security concerns must also be addressed to avoid potential breaches or legal issues, especially when dealing with heavily regulated industries like finance or healthcare.Strategies for Measuring and Maximizing AI ROIAdopting a comprehensive, metrics-driven approach to AI implementation is necessary for assessing your investments business impact. To ensure GenAI delivers meaningful business results, here are some effective strategies:Define high-impact use cases and objectives: Start with clear, measurable objectives that align with core business priorities. Whether its improving operational efficiency or streamlining customer support, identifying use cases with direct business relevance ensures AI projects are focused and impactful.Quantify both tangible and intangible benefits: Beyond immediate cost savings, GenAI drives value through intangible benefits like improved decision-making or customer satisfaction. Quantifying these benefits gives a fuller picture of the overall ROI.Focus on getting the use case right, before optimizing costs: LLMs are still evolving. It is recommended that you first use the best model (likely most expensive), prove that the LLM can achieve the end goal, and then identify ways to reduce cost to serve that use case. This will make sure that the business need is not left unmet.Run pilot programs before full rollout: Test AI in controlled environments first to validate use cases and refine your ROI model. Pilot programs allow organizations to learn, iterate, and de-risk before full-scale deployment, as well as pinpoint areas where AI delivers the greatest value, learn, iterate, and de-risk before full-scale deployment.Track and optimize costs throughout the lifecycle: One of the most overlooked elements of AI ROI is the hidden costs of data preparation, integration, and maintenance that can spiral if left unchecked. IT leaders should continuously monitor expenses related to infrastructure, data management, training, and human resources.Continuous monitoring and feedback: AI performance should be tracked continuously against KPIs and adjusted based on real-world data. Regular feedback loops allow for continuous fine-tuning, ensuring your investment aligns with evolving business needs and delivers sustained value. Related:Overcoming GenAI Implementation RoadblocksRelated:Successful GenAI implementations depend on more than adopting the right technologythey require an approach that maximizes value while minimizing risk. For most IT leaders, success depends on addressing challenges like data quality, model reliability, and organizational alignment. Heres how to overcome common implementation hurdles:Align AI with high-impact business goals. GenAI projects should directly support business objectives and deliver sustainable value like streamlining operations, cutting costs, or generating new revenue streams. Define priorities based on their impact and feasibility.Prioritize data integrity. Poor data quality prevents effective AI. Take time to establish data governance protocols from the start to manage privacy, compliance, and integrity while minimizing risk tied to faulty data.Start with pilot projects. Pilot projects allow you to test and iterate real-world impact before committing to large-scale rollouts. They offer valuable insights and mitigate risk.Monitor and measure continuously. Ongoing performance tracking ensures AI remains aligned with evolving business goals. Continuous adjustments are key for maximizing long-term value.About the AuthorNishad AcharyaHead of Talent Network, TuringNishad Acharya leads initiatives focused on the acquisition and experience of the 3M global professionals on Turing's Talent Cloud. At Turing, he has led critical roles in Strategy and Product that helped scale the company to a Unicorn. With a B.Tech from IIT Madras and an MBA from Wharton, Nishad has a strong foundation in both technology and business. Previously, he led strategy & digital transformation projects at The Boston Consulting Group. Nishad brings a passion for AI and expertise in tech services coupled with extensive experience in sectors like financial services and energy.See more from Nishad AcharyaNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 56 Views
-
WWW.INFORMATIONWEEK.COMGetting a Handle on AI HallucinationsJohn Edwards, Technology Journalist & AuthorNovember 11, 20244 Min ReadCarloscastilla via Alamy Stock PhotoAI hallucination occurs when a large language model (LLM) -- frequently a generative AI chatbot or computer vision tool -- perceives patterns or objects that are nonexistent or imperceptible to human observers, generating outputs that are either inaccurate or nonsensical.AI hallucinations can pose a significant challenge, particularly in high-stakes fields where accuracy is crucial, such as the energy industry, life sciences and healthcare, technology, finance, and legal sectors, says Beena Ammanath, head of technology trust and ethics at business advisory firm Deloitte. With generative AI's emergence, the importance of validating outputs has become even more critical for risk mitigation and governance, she states in an email interview. "While AI systems are becoming more advanced, hallucinations can undermine trust and, therefore, limit the widespread adoption of AI technologies."Primary CausesAI hallucinations are primarily caused by the nature of generative AI and LLMs, which rely on vast amounts of data to generate predictions, Ammanath says. "When the AI model lacks sufficient context, it may attempt to fill in the gaps by creating plausible sounding, but incorrect, information." This can occur due to incomplete training data, bias in the training data, or ambiguous prompts, she notes.Related:LLMs are generally trained for specific tasks, such as predicting the next word in a sequence, observes Swati Rallapalli, a senior machine learning research scientist in the AI division of the Carnegie Mellon University Software Engineering Institute. "These models are trained on terabytes of data from the Internet, which may include uncurated information," she explains in an online interview. "When generating text, the models produce outputs based on the probabilities learned during training, so outputs can be unpredictable and misrepresent facts."Detection ApproachesDepending on the specific application, hallucination metrics tools, such as AlignScore, can be trained to capture any similarity between two text inputs. Yet automated metrics don't always work effectively. "Using multiple metrics together, such as AlignScore, with metrics like BERTScore, may improve the detection," Rallapalli says.Another established way to minimize hallucinations is by using retrieval augmented generation (RAG), in which the model references the text from established databases relevant to the output. "There's also research in the area of fine-tuning models on curated datasets for factual correctness," Rallapalli says.Related:Yet even using existing multiple metrics may not fully guarantee hallucination detection. Therefore, further research is needed to develop more effective metrics to detect inaccuracies, Rallapalli says. "For example, comparing multiple AI outputs could detect if there are parts of the output that are inconsistent across different outputs or, in case of summarization, chunking up the summaries could better detect if the different chunks are aligned with facts within the original article." Such methods could help detect hallucinations better, she notes.Ammanath believes that detecting AI hallucinations requires a multi-pronged approach. She notes that human oversight, in which AI-generated content is reviewed by experts who can cross-check facts, is sometimes the only reliable way to curb hallucinations. "For example, if using generative AI to write a marketing e-mail, the organization might have a higher tolerance for error, as faults or inaccuracies are likely to be easy to identify and the outcomes are lower stakes for the enterprise," Ammanath explains. Yet when it comes to applications that include mission-critical business decisions, error tolerance must be low. "This makes a 'human-in the-loop', someone who validates model outputs, more important than ever before."Related:Hallucination TrainingThe best way to minimize hallucinations is by building your own pre-trained fundamental generative AI model, advises Scott Zoldi, chief AI officer at credit scoring service FICO. He notes, via email, that many organizations are now already using, or planning to use, this approach utilizing focused-domain and task-based models. "By doing so, one can have critical control of the data used in pre-training -- where most hallucinations arise -- and can constrain the use of context augmentation to ensure that such use doesn't increase hallucinations but re-enforces relationships already in the pre-training."Outside of building your own focused generative models, one needs to minimize harm created by hallucinations, Zoldi says. "[Enterprise] policy should prioritize a process for how the output of these tools will be used in a business context and then validate everything," he suggests.A Final ThoughtTo prepare the enterprise for a bold and successful future with generative AI, it's necessary to understand the nature and scale of the risks, as well as the governance tactics that can help mitigate them, Ammanath says. "AI hallucinations help to highlight both the power and limitations of current AI development and deployment."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 57 Views
-
WWW.INFORMATIONWEEK.COMNext Steps to Secure Open Banking Beyond Regulatory ComplianceFinal rules from the Consumer Financial Protection Bureau further the march towards open banking. What will it take to keep such data sharing secure?0 Σχόλια 0 Μοιράστηκε 56 Views
-
WWW.INFORMATIONWEEK.COMRefreshing Your Network DR PlanHurricane Helene was a reminder that network DR plans should be up to date. Here is a checklist to be prepared for the next disaster.0 Σχόλια 0 Μοιράστηκε 53 Views
-
WWW.INFORMATIONWEEK.COMAI on the Road: The Auto Industry Sees the PromisePhong Nguyen, Chief AI Officer, FPT SoftwareNovember 8, 20244 Min ReadBrain light via Alamy StockGenerative AI is reshaping the future of the automotive industry. For industry leaders, this is not just some cutting-edge technology, but a strategic enabler poised to redefine the market landscape. With 79% of executives expecting significant AI-driven transformation within the next three years, harnessing GenAI is no longer optional but essential to remain competitive in a rapidly evolving sector.As AI continues to make its mark, it transforms how vehicles are designed, secures them against evolving threats, and enhances the overall driving experience. From enabling cars to anticipate and respond to cyber risks to accelerating innovation in design, and creating more personalized driving experiences, AI is redefining the key aspects of automotive development and usage.Stopping Security BreachesWith the automotive industry undergoing rapid transformation, the cybersecurity risks it encounters are also increasing and becoming more complex. High-profile breaches, such as the Pandora ransomware attack on a major German car manufacturer in March 2022, highlight the urgent need for more advanced security strategies. The attackers compromised 1.4TB of sensitive data, including purchase orders, technical diagrams, and internal emails, exposing vulnerabilities within the sector.AI-driven systems, including predictive and generative models, process vast amounts of data in real-time, making them indispensable for detecting unusual patterns that signal potential attacks. By continuously learning from past threats and dynamic adaptation to emerging risks, AI-driven systems detect intrusions and work alongside rule-based or supervised models to predict outcomes and simulate attack scenarios for training purposes. These include isolating compromised nodes, blocking malicious IP addresses, and mitigating threats before they escalate. For this reason, 82% of IT decision-makers intend to invest in AI-driven cybersecurity within the next two years.GenAI's abilities to generate data and patterns empower organizations to stay ahead of cybercriminals by anticipating attacks before they occur. A prime example is a leading automotive manufacturer that has significantly improved the security of its vehicle-to-everything (V2X) communication systems by leveraging generative models to simulate various network attack scenarios. This approach allows the network's defensive mechanisms to be trained and tested against imminent breaches.By utilizing models such as variational autoencoders (VAEs) and generative adversarial networks (GANs), which can generate synthetic attack data for simulations, the company could mimic various cyberattack scenarios. This allowed them to detect and mitigate up to 90% of simulated attacks during the testing phases, demonstrating a robust improvement in the overall security posture.Redefining Automotive DesignGenerative AI is ushering in a new wave of innovation in automotive architecture, transforming vehicle design with cutting-edge capabilities. By leveraging generative design techniques, AI-driven systems can automatically produce multiple design iterations, enabling manufacturers to identify the most efficient and effective solutions. GenAI design optimizes engineering and aesthetic decisions, helping manufacturers reduce development time and costs by up to 20%, according to Precedence Research, giving companies a competitive edge in expediting time-to-market.Toyota Research Institute has integrated a generative AI tool that enables designers to leap from a text description to design sketches by specifying stylistic attributes such as sleek, SUV-like, and modern. Tackling the challenges where designs frequently fell short of meeting engineering requirements, this tool integrates both aesthetic and engineering requirements. That allows designers and engineers to collaborate more effectively while ensuring that the final designs meet critical technical specifications. By bridging the gap between creative and engineering teams, companies can ensure that final designs meet essential specifications while enhancing both the speed and quality of design iterations, enabling faster and more efficient innovation.A More Connected and Personalized Driver ExperienceOriginal equipment manufacturers are transforming the customer experience with GenAI in an increasingly demanding market. Unlike traditional voice command systems that rely on static and pre-programmed responses, AI-powered voice technology offers dynamic, natural conversations. Integrated into vehicles, GenAI enhances GPS navigation, entertainment systems, and other in-car functionalities, allowing drivers to interact meaningfully with their vehicles AI assistant.Volkswagen, for example, became the first automotive manufacturer to integrate ChatGPT into its voice assistant IDA. This offers drivers an AI-powered system that manages everything from infotainment to navigation and answers general knowledge questions.As GenAI continues to become more advanced, delivering an exceptional driver experience is now a key differentiator for manufacturers looking to stay competitive. Despite the significant advancements in leveraging AI to enhance customer interactions, many original equipment manufacturers (OEMs) struggle to meet customer expectations. A recent Boston Consulting Group study revealed that, while the quality of the car-buying experience is the most critical decision factor for many customers, only 52% of customers say they are completely satisfied with their most recent car-buying experience. This underscores the need for OEMs to refine the integration of AI-driven systems further to enhance both the purchasing and ownership experience.About the AuthorPhong NguyenChief AI Officer, FPT SoftwarePhong Nguyenis FPT Softwares chief artificial intelligence officer. He is an influential leader with vast managerial and technical experience, listed as Top150 AI Executives by Constellation Research 2024. Phong holds a PhD from the University of Tokyo and a master's degree from Carnegie Mellon University.See more from Phong NguyenNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 54 Views
-
WWW.INFORMATIONWEEK.COMHow AI is Reshaping the Food Services IndustryJohn Edwards, Technology Journalist & AuthorNovember 8, 20246 Min ReadPanther Media GmbH via Alamy Stock PhotoThe food services industry might seem an unlikely candidate for AI adoption, yet the market, which includes full-service restaurants, quick-service restaurants, catering companies, coffee shops, private chefs, and a variety of other participants, is rapidly recognizing AI's immediate and long-term potential.AI in food services is poised for widespread adoption, predicts Colin Dowd, industry strategy senior manager at Armanino, an accounting and consulting firm. "As customer expectations shift, companies will be forced to meet their demands through AI solutions that are similar to their competitors," he notes in an email interview.Mike Kostyo, a vice president with food industry consulting firm Menu Matters, agrees. "It's hard to think of any facet of the food industry that isn't being transformed by AI," he observes via email. Kostyo says his research shows that consumers want lower costs --making it easier to customize or personalize a meal -- and faster service. "We tell our clients they should focus on those benefits and make sure they're clear to consumers when they implement new AI technologies."Seeking InsightsOn the research side, AI is being used to make sense out of the data deluge firms currently face. "Food companies are drowning in research and data, both from their own sources, such as sales data and loyalty programs, and from secondary sources," Kostyo says. "It's just not feasible for a human to wade through all of that data, so today's companies use AI to sift through it all, make connections, and develop recommendations."Related:AI can, for example, detect that spicy beverages are starting to catch on when paired with a particular flavor. "So, it may recommend building that combination into a new menu option or product," Kostyo says. It can do this constantly over time, taking into account billions of data points, creating innovation starting positions. "The team can take it from there, filling their pipeline with relevant products and menu items."Data collected from multiple sources can also be used to track customer preferences, providing early insights on emerging flavor trends. "For example, Campbells and Coca-Cola are currently using AI in tandem with food scientists to create new and exciting flavors and dishes for their customers based on insights collected from both internal and external data sources," Dowd says. "This approach can also be applied to restaurants and other locations that rely on recipes."Management and InnovationAI can also optimize inventory management. "AI is being used to determine when to order, and how much inventory a company needs to purchase, by analyzing historical data and current trends," Dowd says. "This allows the restaurant to maintain ideal inventory levels, reduce waste and better ensure that the restaurant always has the necessary ingredients."Related:When used as an innovation generator, AI can inspire fresh ideas. "Sometimes, when you get in that room together to come up with a new menu item or product, just facing down that blank page is the hardest part," Kostyo observes. "You can use AI for some starter ideas to work with." He says he loves to feed outlandish ideas into AI, such as, 'What would a dessert octopus look like?' "It may then develop this really wild dessert, like a chocolate octopus with different-flavored tentacles."Customer ExperienceAI promises to help restaurants provide a consistently positive experience to consumers, says Jay Fiske, president of Powerhouse Dynamics, an AI and IoT solutions provider for major multi-site food service firms, including Dunkin', Arby's, and Buffalo Wild Wings. He notes in an email interview that AI and ML can be used to flag concerning data, indicating potential problems, such as frozen meat going into the oven before it should, or predicting a likely freezer breakdown sometime within the next two weeks. "In these situations, facility managers have time to quickly preempt any issues that could cost them money, as well as their reputations with consumers," he says.Related:Another way AI is transforming the food services industry is by providing more efficient and reliable energy management. "This is important, because restaurants, ghost kitchens, and other food service businesses are extremely energy intensive," Fiske says. Refrigerators, freezers, ovens, dish washers, fryers, and air conditioners all consume massive amounts of power that can be controlled and optimized by AI.Future OutlookThe sky is the limit for food services industry AI, Kostyo states, noting that market players are taking various approaches. Some are excited about AI, and afraid to get left behind, so they're jumping right into these tools, while others are a little more skittish, concerned about ethical and privacy issues.Kostyo urges AI adopters to periodically monitor their customers' AI acceptance level. "In some ways, customers are very open to AI," he says. "Forty-six percent of consumers told us they're already using AI to assist with food decisions in some fashion, such as deciding what to cook or where to eat." Kostyo adds that 59% of surveyed consumers believe that AI can develop a recipe that's just as delicious as anyhuman chef could create.On the other hand, people still often crave a human touch. Kostyo reports that 66% of consumers would still rather have a dish that was created by a human chef. "Consumers frequently push back when they see AI being used in a way that would take a human job."Service FirstKostyo urges the food industry to use AI in ways that will enhance the overall consumer experience. "At the end of the day, we are the hospitality industry, and we need to remember that."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 71 Views
-
WWW.INFORMATIONWEEK.COMGenAIs Impact on CybersecurityGenerative AI adoption is becoming ubiquitous as more software developers include the capability in their applications and users flock to sites like OpenAI to boost productivity. Meanwhile, threat actors are using the technology to accelerate the number and frequency of attacks.GenAI is revolutionizing both offense and defense in cybersecurity. On the positive side, it enhances threat detection, anomaly analysis and automation of security tasks. However, it also poses risks, as attackers are now using GenAI to craft more sophisticated and targeted attacks [such as] AI-generated phishing, says Timothy Bates, AI, cybersecurity, blockchain & XR professor of practice at University of Michigan and former Lenovo CTO. If your company hasnt updated its security policies to include GenAI, its time to act.According to James Arlen, CISO at data and AI platform company Aiven, GenAIs impact is proportional to its usage.If a bad actor uses GenAI, you'll get bad results for you. If a good actor uses GenAI wisely you'll get good results. And then there is the giant middle ground of bad actors just doing dumb things [like] poisoning the well and nominally good actors with the best of intentions doing unwise things, says Arlen. I think the net result is just acceleration. The direction hasn't changed, it's still an arms race, but now it's an arms race with a turbo button.Related:The Threat Is Real and GrowingGenAI is both a blessing and a curse when it comes to cybersecurity.On the one hand, the incorporation of AI into security tools and technologies has greatly enhanced vendor tooling to provide better threat detection and response through AI-driven features that can analyze vast amounts of data, far quicker than ever before, to identify patterns and anomalies that signal cyber threats, says Erik Avakian, technical counselor at Info-Tech Research Group. These new features can help predict new attack vectors, detect malware, vulnerabilities, phishing patterns and other attacks in real-time, including automating the response to certain cyber incidents. This greatly enhances our incident response processes by reducing response times and allowing our security analysts to focus on other and more complex tasks.Meanwhile, hackers and hacking groups have already incorporated AI and large language modeling (LLM) capabilities to carry out incredibly sophisticated attacks, such as next-generation phishing and social engineering attacks using deep fakes.The incorporation of voice impersonation and personalized content through deepfake attacks via AI-generated videos, voices or images make these attacks particularly harder to detect and defend against, says Avakian. GenAI can and is alsobeing used by adversaries to create advanced malware that adapts to defenses and evades current detection systems.Related:Pillar Securitys recent State of Attacks on GenAI report contains some sobering statistics about GenAIs impact on cybersecurity:90% of successful attacks resulted in sensitive data leakage.20% of jail break attack attempts successfully bypassed GenAI application guardrails.Adversaries require an average of just 42 seconds to execute an attack.Attackers needed only five interactions, on average, to complete a successful attack using GenAI applications.The attacks exploit vulnerabilities at every stage of interaction with GenAI systems, underscoring the need for comprehensive security measures. In addition, the attacks analyzed as part of Pillar Securitys research reveal an increase in both the frequency and complexity of prompt injection attacks, with users employing more sophisticated techniques and making persistent attempts to bypass safeguards.My biggest concern is the weaponization of GenAI -- cybercriminals using AI to automate attacks, create fake identities or exploit zero-day vulnerabilities faster than ever before. The rise of AI-driven attacks means that attack surfaces are constantly evolving, making traditional defenses less effective, says University of Michigans Bates. To mitigate these risks, were focusing on AI-driven security solutions that can respond just as rapidly to emerging threats. This includes leveraging behavioral analytics, AI-powered firewalls, and machine learning algorithms that can predict potential breaches.Related:In the case of deepfakes, Josh Bartolomie, VP of global threat services at email threat and defense solution provider Cofense recommends an out-of-band communication method to confirm the potentially fraudulent request, utilizing internal messaging services such as Slack, WhatsApp, or Microsoft Teams, or even establishing specific code words for specific types of requests or per executive leader.And data usage should be governed.With the increasing use of GenAI, employees may look to leverage this technology to make their job easier and faster. However, in doing so, they can be disclosing corporate information to third party sources, including such things as source code, financial information, customer details [and] product insight, says Bartolomie. The risk of this type of data being disclosed to third party AI services is high, as the totality of how the data is used can lead to a much broader data disclosure that could negatively impact that organization and their products [and] services.Casey Corcoran, field chief information security officer at cybersecurity services company Stratascale -- an SHI company, says in addition to phishing campaigns and deep fakes, bad actors are using models that are trained to take advantage of weaknesses in biometric systems and clone persona biometrics that will bypass technical biometric controls.[M]y two biggest fears are: 1) that rapidly evolving attacks will overwhelm traditional controls and overpower the ability of humans to distinguish between true and false; and 2) breaking the need to know and overall confidentiality and integrity of data through unmanaged data governance in GenAI use within organizations, including data and model poisoning, says Corcoran.Tal Zamir, CTO at advanced email and workspace security solutions provider Perception Point warns that attackers exploit vulnerabilities in GenAI-powered applications like chatbots, introducing new risks, including prompt injections. They also use the popularity of GenAI apps to spread malicious software, such as creating fake GenAI-themed Chrome extensions that steal data.Attackers leverage GenAI to automate tasks like building phishing pages and crafting hyper-targeted social engineering messages, increasing the scale and sophistication of attacks, says Zamir. Organizations should educate employees about the risks of sharing sensitive information with GenAI tools, as many services are in early stages and may not follow stringent security practices. Some services utilize user inputs to train models, risking data exposure. Employees should be mindful of legal and accuracy issues with AI-generated content, and always review it before sharing, as it could embed sensitive information.Bad actors can also use GenAI to identify zero days and create exploits. Similarly, defenders can also find zero days and create patches, but time is the enemy: hackers are not encumbered by rules that businesses must follow.[T]here will likely still be a big delay in applying patches in a lot of places. Some might even require physically replacing devices, says Johan Edholm, co-founder, information security officer and security engineer at external attack surface management platformprovider Detectify. In those cases, it might be quicker to temporarily add things between the vulnerable system and the attacker, like a WAF, firewall, air gapping, or similar, but this won't mitigate or solve the risk, only reduce it temporarily. "Make Sure Company Policies Address GenAIAccording to Info-Tech Research Groups Avakian, sound risk management starts with general and AI specific governance practices that implement AI policies.Even if our organizations have not yet incorporated GenAI technologies or solutions yet into the environment, it is likely that our own employees have experimented with it or are using AI applications or components of it outside the workplace, says Avakian. As CISOs, we need to be proactive and take a multi-faceted approach to implementing policies that account for our end-user acceptable use policies as well as incorporating AI reviews into our risk assessment processes that we already have in place. Our security policies should also evolve to reflect the capabilities and risks associated with GenAI if we don't have such inclusions in place already.Those policies should span the breadth of things in GenAI usage, ranging from AI training that covers data protection to monitoring to securing new and existing AI architectural deployments. Its also important that security, the workforce, privacy teams and legal teams understand AI concepts, including the architecture, privacy and compliance aspects so they can fully vet a solution containing AI components or features that the business would like to implement.Implementing these checks into a review process ensures that any solutions introduced into the environment will have been vetted properly and approved for use and any risks addressed prior to implementation and use, vastly reducing risk exposure or unintended consequences, says Avakian. Such reviews should incorporate policy compliance, access control reviews, application security, monitoring and associated policies for our AI models and systems to ensure that only authorized personnel can access, modify or deploy them into the environment. Working with our legal teams and privacy officers can help ensure any privacy and legal compliance issues have been fully vetted to ensure data privacy and ethical use.What if your companys policies have not been updated yet? Thomas Scanlon, principal researcher at Carnegie Mellon University's Software Engineering Institute recommends reviewing exemplar policies created by professional societies to which they belong or consulting firms with multiple clients.The biggest fear for GenAIs impact on cybersecurity is that well-meaning people will be using GenAI to improve their work quality and unknowingly open an attack vector for adversaries, says Scanlon. Defending against known attack types for GenAI is much more straightforward than defending against accidental insider threats.Technology spend and risk managementplatform Flexera established a GenAI policy early on, but it became obvious that the policy was quickly becoming obsolete.GenAI creates a lot of nuanced complexity that requires fresh approaches for cybersecurity, says Conal Gallagher, CISO & CIO of Flexera. A policy needs to address whether the organization allows or blocks it. If allowed, under what conditions? A GenAI policy must consider data leakage, model inversion attacks, API security, unintended sensitive data exposure, data poisoning, etc. It also needs to be mindful of privacy, ethical, and copyright concerns.To address GenAI as part of comprehensive risk management, Flexera formed an internal AI Council to help navigate the rapidly evolving threat landscape.Focusing efforts there will be far more meaningful than any written policy. The primary goal of the AI Council is to ensure that AI technologies are used in a way that aligns with the companys values, regulatory requirements, ethical standards and strategic objectives, says Gallagher. The AI Council is comprised of key stakeholders and subject matter experts within the company. This group is responsible for overseeing the development, deployment and internal use of GenAI systems.Bottom LineGenAI must be contemplated from end user, corporate risk and attacker perspectives. It also requires organizations to update policies to include GenAI if they havent done so already.The risks are generally two-fold: intentional attacks and inadvertent employee mistakes, both of which can have dire consequences for unprepared organizations. If internal policies have not been reviewed with GenAI specifically in mind and updated as necessary, organizations open the door to attacks that could have been avoided or mitigated.0 Σχόλια 0 Μοιράστηκε 68 Views
-
WWW.INFORMATIONWEEK.COMThreatLocker CEO Talks Supply Chain Risk, AIs Cybersecurity Role, and FearShane Snider, Senior Writer, InformationWeekNovember 7, 20246 Min ReadPictured: ThreatLocker CEO Danny Jenkins.Image provided by ThreatLockerIts no secret that cybersecurity concerns are growing. This past year has seen massive breaches, such as the breach of National Public Data (with 2.7 billion records stolen), and several large breaches of Snowflake customers such as Ticketmaster, Advance Auto Parts and AT&T. More than 165 companies were impacted by the Snowflake-linked breaches alone, according to a Mandiant investigation.According to CheckPoint research, global cyber-attacks increased by 30% in the second quarter of 2024, to 1,636 weekly attacks per organization. An IBM report says the average cost of a data breach globally rose 10% in 2024, to $4.8 million.So, its probably not that surprising that Orlando, Fla.-based cybersecurity firm ThreatLocker has ballooned to 450 employees since its 2017 launch. InformationWeek caught up with ThreatLocker CEO Danny Jenkins at the Gartner IT Symposium/XPO in Orlando last month.(Editors note: The following interview is edited for clarity and brevity.)Can you give us a little overview on what you were talking about at the event?What were talking about is that when youre installing software on your computer, that software has access to everything you have access to, and people often dont realize if they download that game, and there was a back door in that game, if there was some vulnerability from that game, it could potentially steal my files, grant someone access to my computer, grab the internet and send data. So, what we were really talking about was the supply chain risk. The biggest thing is vulnerabilities: The things a vendor didnt intend to do, but accidentally granted someone access to your data. You can really enhance your security through sensible controls and limiting access to those applications rather than trying to find every bad thing in the world.Related:AI has been the major reoccurring theme throughout the symposium. Can you talk a little about the way we approach these threats and how that is going to change as more businesses adopt emerging technologies like GenAI?Whats interesting is that were actually doing a session on how to create successful malware, and were going to talk about how were able to use AI to create undetectable malware versus the old way. If you think about AI, and you think about two years ago, if you wanted to create malware, there were a limited number of people in the world that could do that -- youd have to be a developer, youd have to have some experience, youd have to be smart enough to avoid protections. That pool of people was quite small. Today, you can just ask ChatGPT to create a program to do whatever you want, and it will spit out the code instantly. The amount of people that have the ability to create malware has now drastically increased the way to defend against that is to change the way you think about security. The way most companies think about security now is theyre looking for threats in their environment -- but thats not effective. The better way of approaching security is really to say, "Im just going to block what I dont need, and I dont care if its good and I dont care if its bad. If its not needed in my business, Im going to block it from happening."Related:As someone working in security, is the pace of AI adoption in enterprise a concern?I think the concern is the pace and the fear. AI has been around for a long time. What were seeing the last two years is generative AI and thats whats scaring people. If you think about self-driving cars, you think about the ability of machine learning, the ability to see data and manipulate and learn from that data. Whats scary is that the consumer is now seeing AI that produces and before it was always stuff in the background that you never really thought about. You never really thought about how your car is able to determine if somethings a trash can or if its a person. Now this thing can draw pictures and it can write documents better than I do, and create code. Am I worried about AI taking over the world from that perspective? No. But I am concerned about the tool set that weve now given people who may not be ethical.Related:Before, if you were smart enough to write successful malware, at least in the Western Hemisphere, youre smart enough to get a job and youre not going to risk going to jail. The people who were creating successful malware before, or successful cyber-attacks, were people in countries where there were not opportunities, like Russia. Now, you dont need to be smart enough to create successful cyber-attacks, and thats what concerns me. If you give someone who doesnt have capacity to earn a living access to tools that can allow them to steal data, the path they are going to follow is cyber crime. Just like other crime, when the economy is down and people dont have job, people steal and crime goes up. Cyber crime before was limited to people who had an understanding of technology. Now, the whole world will have access and thats what scares me -- and GenAI has facilitated that.How do you see your business changing in the next 5-10 years because of AI adoption?Ultimately, it changes the way people think about security, to where they have to start adopting more zero-trust approaches and more restrictive controls in their environment. Thats how it has to go -- there is no alternative. Before, there was a 10% chance you were going to get damaged by an attack, now its an 80% chance.If youre the CIO of an enterprise, how should you be looking at building out these new technologies and building on these new platforms? How should you be thinking about the security side of it?At the end of the day, you have to consider the internal politics of the business. And weve gone from a world where IT people and CIOs, who often come from introverted backgrounds where they dont communicate with boards, were seen as the people that make our computers work, and not the people who protect our business now the board is saying we have to bring a security department. I feel like if youre the CIO, you should be leading the conversation with your security team as a CIO, you should be driving that.What was one of your biggest takeaways from the event overall?I think the biggest thing Im seeing in the industry is fear is increasing, and rightly so. Were seeing more people willing to say, "I need to solve my problem. I know were sitting ducks right now." Thats because were on the technology side and we live and breathe this stuff. But what we dont necessarily always understand is what the customer perspective and customer viewpoint is and how do we solve their problems.About the AuthorShane SniderSenior Writer, InformationWeekShane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.See more from Shane SniderNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 63 Views
-
WWW.INFORMATIONWEEK.COMHow to Find the Right CISOGreat CISOs are in short supply, so choose wisely. Here are five ways to make sure you've made the right pick.0 Σχόλια 0 Μοιράστηκε 67 Views
-
WWW.INFORMATIONWEEK.COM5 Ways to Overcome Digital Transformation Culture ShockMegan Williams, VP, Global Technology Strategy and Transformation, TransUnionNovember 6, 20244 Min ReadFederico Caputo via Alamy StockAs organizations strive to meet their goals, integrating digital technology into analytics, artificial intelligence and machine learning, and cloud migration has become essential. The end game is to transform businesses operations, share information, and deliver customer value. While digital transformation promises increased efficiency, productivity, and reduced costs, its success fundamentally depends on people. Neglecting the human aspect of transformation is a recipe for failure from the outset.A BCG study on digital transformation found that 90% of companies focusing on culture during their transformation journey experienced solid financial performance, compared to 17% that didnt. Despite projections that global spending on digital transformation will reach $3.4 trillion by 2026, theres a high failure rate -- around 70%, according to McKinsey. Much of this failure can be attributed to organizational culture shock, where employees react negatively to sudden changes.In 1955, Sverre Lysgaard developed a model describing how individuals adapt to a new culture, beginning with a honeymoon phase, followed by culture shock, then adjustment, and finally adaptation. This process mirrors what happens to employees during digital transformation. Companies must invest in addressing culture shock to ensure the success of their digital initiatives.Related:We recently embarked on a significant digital transformation with the introduction of our solution enablement platform. This platform unites various data and analytic assets built for risk management, marketing, and fraud prevention into one unified environment. This transformation enhances our ability to provide a more accurate picture of consumers across various use cases. From my experience rolling out this platform, Ive identified five key strategies companies can use to navigate digital transformation successfully and avoid employee culture shock.1. Foundation settingIts essential to communicate your vision and strategy. A well-defined roadmap that outlines the steps to achieve transformation goals is crucial. McKinsey reports that organizations with a clear change management strategy are six times more likely to succeed. Personalizing the vision for each employee ensures they believe in the transformation and actively participate in it.2. Employee training and educationTraining is vital for engaging employees and advancing their careers. Yet only 56% of organizations report expanding training on digital tools and new processes, according to PwC. At our company, we incentivize employees to complete training programs that enhance their skills, which leads to a more engaged workforce. Employees are encouraged to think about the skills they want to develop for their future, ensuring that our digital transformation also benefits their personal career growth.Related:A significant focus of our training has been on our solution enablement platform. Weve curated specific training for employees, including certifications, across the organization. This approach encourages long-term career development while promoting a deeper understanding of new technologies.3. Be transparent and share progressFrequent updates on successes and challenges foster trust and authenticity. Organizations should openly communicate any changes to the roadmap or strategy. At my company, we hold regular meetings where we showcase both the progress and the hurdles we face during our technology evolution. Integration is a crucial theme; we highlight how different teams benefit from the work.4. Embrace learning and failuresEncouraging a culture that views failure as a learning opportunity fosters innovation. Open lines of communication allow employees to share issues and contribute to continuous improvement. This helps employees feel secure enough to try new things and become active participants in the transformation.Related:At our company, we conduct regular retrospectives of our planned releases. When things dont go as expected, we focus on what can be learned, not the failure itself. This feedback loop is shared transparently, providing valuable insights for the entire team and fostering a culture of continuous improvement.5. Find championsToo often, change management is reduced to sending out emails or presentations. While these methods are helpful, true transformation requires more personal involvement. Identifying champions within the organization can significantly boost morale and support. These champions dont need to be formal leaders but are individuals who believe in transformation and help guide their peers through the process.Recently, our enterprise capabilities marketing and investor relations teams met with our engineers to better understand the benefits of our solution enablement platform. They became champions of the transformation and shared their enthusiasm with key stakeholders, which in turn had a positive impact on investors.ConclusionDigital transformation offers tremendous potential, but it comes with inherent challenges. To succeed, organizations must place people at the heart of the process through training, transparent communication, and fostering a culture that embraces learning from failures. Companies can mitigate culture shock and achieve their transformation goals by following these five strategies.About the AuthorMegan WilliamsVP, Global Technology Strategy and Transformation, TransUnionMegan Williams is an innovative leader with over 20 years experience leading global, multi-year transformations and implementing large, complex program delivery in fast-paced technical industries. She combines a hands-on, forward-thinking approach with an extensive background in IT strategic alignment, process re-engineering, budgeting and forecasting, and translation of regulatory requirements to drive success.With over 20 years experience leading diverse, cross-functional teams in the UK, US, South Africa and Europe, Megan is adept at partnering with international organizations to release global products through new end-to-end product development cycles in complex, heavily matrixed environments.See more from Megan WilliamsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 65 Views
-
WWW.INFORMATIONWEEK.COMLetting Neurodiverse Talent Shine in CybersecurityApproximately 15% to 20% of people are neurodivergent, and that percentage could be even higher in STEM fields. Neurodiversity is a broad term that includes many different conditions: autism spectrum disorder (ASD); attention-deficit/hyperactivity disorder (ADHD); and dyslexia, to name just a few.As cybersecurity stakeholders continue to discuss filling the talent gap and tackling todays security challenges, neurodiverse talent is a valuable resource. But attracting and working with this talent requires leaders to recognize the different needs of neurodivergent people and to foster work environments that make the most of their skills.Neurodiversity as an AssetMany major companies, such as Microsoft and SAP, recognize the value of neurodiverse talent and have formal recruiting programs. Jodi Asbell-Clarke, PhD, heard firsthand from companies with these kinds of hiring initiatives as she conducted research for her book on teaching neurodivergent people in STEM.I expected to hear something like, Oh, the CEOs nephew was autistic, and we wanted to do the right thing. I expected to hear things about philanthropy and equity, and that w as not what I heard at all, Asbell-Clarke, a senior leader and research scientist with TERC, a nonprofit focused on advancing STEM education, told InformationWeek. They were saying it's because the talent. We consider neurodiversity in our workforce our competitive advantage. These are the most persistent and creative and systematic problem solvers.Related:How can that talent be put to work in the cybersecurity workforce?Ian Campbell was diagnosed with major depressive disorder and generalized anxiety early in his life. Then, at the start of the pandemic, he was diagnosed as autistic. Cybersecurity was not his first career. He was providing tech support for the US House of Representatives before he made the switch to security. Currently, he is a senior security operations engineer at DomainTools, a domain research service company.Throughout his career, Campbell has found hyperfocus to be one of his strengths. Scrolling through tens of thousands of things, of log files, hyper-focusing on that, and being able to intuitively pattern match or detect pattern deviations was a huge benefit in both tech support and security, he says.Megan Roddie-Fonseca, senior security engineer at cloud monitoring as a service company Datadog, is autistic and has ADHD. She shares how productivity is one of her biggest strengths.I find efficient ways to do things, she says. I use that efficiency to be able to tackle tasks in a way that some people might not get the same amount of work done in the same amount of time.Related:Challenges in the WorkplaceWhile awareness of neurodiversity, and the nuance within that very broad term, is growing, there are still plenty of potential challenges in the workplace.Neurodivergent people face the tricky question of disclosure. Should they tell their managers and coworkers about their diagnoses? Neurodiversity is more openly discussed, but that doesnt mean there arent people who will misunderstand or react to disclosure negatively.A lot of people I know who are neurodivergent haven't come out as neurodivergent because they don't want to be seen that way, says Campbell. They don't want, frankly, their careers limited by someone who has a poor view of neurodivergence.The decision to conceal neurodivergent traits, known as masking, can be a difficult undertaking.Masking is basically suppressing your own neurodivergent urges and needs for the sake of function in a world that's not built for us, and masking is incredibly tiring, says Campbell.The decision to disclose or not is a personal choice, one that is likely influenced by the level of support people can expect from a workplace.Related:The way people communicate at work, for example, can potentially lead to misunderstandings. One study using the classic game of telephone -- a group passes information to one another down a line of several people -- illustrates these potential challenges.The study broke its subjects into three groups of people: autistic, non-autistic, and mix of both. The first two groups exhibited the same skill level relating to information transfer. But communication problems arose in the mixed group.In a cybersecurity workplace, neurotypical and neurodiverse people are going to need to find ways to communicate with one another effectively. Some work environments will foster opportunities to learn how to best build those communication pathways. Some wont.The physical aspects of the work environment can also be a challenge for neurodivergent people who have sensory processing issues. The lighting and sound levels of an office, for example, can result in sensory overwhelm for some people.Hiring and Supporting Neurodiverse TalentEnterprises can attract neurodiverse talent through formal hiring programs or by working with external organizations, such as Specialisterne. Regardless of the approach, partnered or solo, hiring managers and cybersecurity team leaders need to evaluate and adapt their strategies.During the interview process, Asbell-Clarke recommends matching that short experience to the work you hope to see in the actual work environment. If you are hiring someone who will be conducting highly detailed work under time constraints, mirror that process when evaluating candidates.If you want to see people's best problem-solving, give them the time and space to solve a task and then ask them about how they did it, she says.In the cybersecurity work environment, managers will find that getting the best work from their neurodivergent workers will require varying approaches.Neurodiversity is this massive spectrum, says Jackie McGuire, senior security strategist at Cribl, a unified data management platform. It can be confusing as a manager because you can have two team members who are on exact opposite ends of that spectrum who need completely polar opposite things.For example, one neurodivergent person may thrive in a structured environment, while another may do their best work with a high degree of freedom. Additionally, the ways neurodivergent people best receive and respond to feedback can differ.Taking that nuanced approach to management can not only benefit neurodivergent works but cybersecurity teams as a whole. Asbell-Clarke offers some questions that managers can ask their workers.What are the conditions that will make you the best problem solver? What do you need to have your talent shine? she says. Ask that of everyone, not just the neurodivergent.Direct, clear communication is one of the most valuable strategies for empowering cybersecurity teams with both neurodivergent and neurotypical people. For example, teams can commit to keeping clear notes and highlighting action items from meetings to ensure everyone is on the same page.Creating the kind of environment that is responsive to the different needs of its employees is an iterative process. Over time, workplaces can become more supportive of neurodiverse talent and encourage them to do their best work.Employers can encourage their neurodivergent learners to unmask by removing any stigma that may come along with that. Not only is it about adapting the workplace, it's also about adapting the culture, says Asbell-Clarke.Neurodiversity and Navigating the WorkplaceHow can neurodivergent people play a role in shaping cybersecurity workplaces? People, such as McGuire, Campbell, Roddie-Fonseca, who speak up can increase awareness of neurodiversity and its tremendous value to employers. But not everyone is in the position to be an advocate.Unfortunately, the people who would benefit the most from accommodations are oftentimes also the people the least likely to ask for them or the least able to initiate that conversation, McGuire points out.But that doesnt mean nothing can be done. Recognizing your neurodiversity can be an important step forward. Do what you can to educate yourself more on what neurodiversity is and the way it manifests and the types of support you can provide yourself, McGuire recommends.Connecting with other neurodivergent people, either at work or industry events, can be a helpful way to discuss navigating the workplace.Some companies have formal neurodiversity working groups. McGuire, who has ADHD and autism, helped co-found a neurodiversity employee resource group. One of our initial focuses is what can we do to help neurodiverse people better advocate for themselves at work, she shares.If a company doesnt have one of these groups, look for ways to create an informal one. The way neurodiversity manifests in different people, if you get more than a couple of neurodiverse people together you will get one of them who is a great advocate, says McGuire.Roddie-Fonseca didnt consider herself much of an advocate until her manager suggested she submit a talk about her experience as a neurodivergent individual in cybersecurity at the hacker and security conference Defcon.Attending cybersecurity industry can events can help neurodivergent people connect and discuss their workplace experiences and be a valuable tool for career development. There's a lot of competition for jobs at times and who you know does make an impact, says Roddie-Fonseca.Accommodations can be an important way to ensure neurodiverse people can do their best work but having that conversation can be uncomfortable for both the people asking and the people listening.Everybody's afraid of accommodations, but if we want to pull the amazing strengths from these neurodiverse people we have to be willing to invite things like accommodations and be flexible with them, says Campbell.Building a career in cybersecurity, or any other industry, takes time and often trial and error. There is no guarantee that a workplace will be the right fit.Understand that there are organizations and managers out there that will support you and will value you for who you are, not who they want you to be, says Roddie-Fonseca. Continue pursuing the opportunities to find a place where you will be happy and comfortable and thrive versus accepting a place that doesn't truly value you for the strengths you do have.0 Σχόλια 0 Μοιράστηκε 66 Views
-
WWW.INFORMATIONWEEK.COMThe Current Top AI EmployersJohn Edwards, Technology Journalist & AuthorNovember 6, 20246 Min Readtanit boonruen via Alamy Stock PhotoWhile the unemployment rate for IT professionals rose to 6% in August, up from 5.6% the prior month, the situation is far brighter for AI experts.The AI job market has shown resilience and growth, especially in the first half of 2024, says Antti Karjalainen, an analyst with WilsonHCG, a global executive search and talent consulting firm. "Despite some fluctuations, the demand for AI professionals remains robust, driven by increased investments in AI technologies and projects," he observes in an online interview.Amazon currently leads the pack with 1,525 AI-related employees, primarily operating in the e-commerce and cloud computing sectors, according to data from WilsonHCGs talent intelligence and labor market analytics platform. Meta follows closely with 1,401 employees, while Microsoft is next with 1,253 employees in AI-related roles. "As expected, Apple and Alphabet also have significant numbers with 1,204 and 970 employees, respectively," Karjalainen notes.TalentNeuron, a global labor market analytics provider, breaks down the market somewhat differently. "Globally, the top five AI employers are Google, Capital One, Amazon, ByteDance, and TikTok," says David Wilkins, the firm's chief product and marketing officer. "Of note, Amazon saw a 519% increase in AI job postings year-over-year, and Google saw a 367% increase," he observes in an online interview. "Out of the top 20 AI employers, Reddit saw the largest year-over-year increase at 1,579%."Related:While the US is a strong market for AI talent, there's a significant shortage of AI specialists relative to the growing demand, Wilkins says. "So, companies, Google among them, have expanded overseas for talent." TalentNeuron's latest report on tech talent hubs found that demand growth is highest in emerging, lower-cost markets, such as the Indian cities of Pune and Hyderabad, as organizations seek to strategically place AI capabilities.Sought-After SkillsThe most sought-after skills in AI job postings, according to WilsonHCG data, include deep learning, machine learning model development, computer vision, generative AI, and natural language processing (NLP), Karjalainen says. "These skills are crucial for developing advanced AI systems and applications." He adds that advanced algorithm development, model deployment and productionization (the process of turning a prototype into something that can be mass-produced), and AI-specific programming languages, such as TensorFlow, PyTorch, and Keras, are also highly valued by employers.Related:Many employers also value proficiency in programming languages, such as Python, MATLAB, C++, and Java, as well as data analysis and statistical modeling talents. "These skills are foundational for any AI-related role and are necessary for developing, testing, and deploying AI models," Karjalainen says. Having the ability to work with large datasets, perform data mining, and apply statistical techniques is also crucial, he notes. "Employers are looking for candidates who can not only build AI models but also interpret and analyze the results to drive business decisions."Top FieldsWilsonHCG finds that the computer software industry leads with 4,135 AI professionals, indicating a strong demand for AI talent in software development and related services. Following closely is the IT and services sector, which employs 3,304 AI professionals. "This sector includes companies that provide IT consulting, system integration, and managed services, all of which are increasingly incorporating AI into their offerings," Karjalainen says.With 2,176 individuals working in the area, research organizations also have a significant number of AI professionals. This sector includes academic institutions, research labs, and private research firms focused on advancing AI technologies, Karjalainen says. Financial services, with 819 AI professionals, is yet another key sector, as banks, insurance companies and investment firms leverage AI for risk management, fraud detection, and customer service. Meanwhile, the internet industry, which includes companies providing online services and platforms, employs 635 AI professionals, reflecting the importance of AI in enhancing user experiences and optimizing operations.Related:Karjalainen says that other fields with significant AI employment include higher education (444 professionals), biotechnology (384 professionals), and mechanical or industrial engineering (378 professionals). The hospital and health care sector employs 324 AI professionals, highlighting the growing use of AI in medical diagnostics, treatment planning, and patient care. The automotive industry, with 320 AI professionals, is also a key player, particularly in the development of autonomous vehicles and advanced driver-assistance systems. Other important fields employing AI professionals include management consulting, electrical/electronic manufacturing, and semiconductors.Salary TrendsWilsonHCG data shows that AI job postings consistently offer higher salaries than non-AI IT postings. For instance, in July 2024, the average advertised salary for AI jobs was $166,584, while for non-AI IT jobs the average was $110,005. The comparison represents a difference of $56,579, or 51.4%.Looking at the annual median salary, AI jobs offer $150,018 compared to $108,377 for non-AI IT jobs, resulting in a difference of $41,641, or 38.4%, Karjalainen says. "This trend is consistent across various months, with AI job salaries consistently outpacing those of non-AI IT jobs by a substantial margin."Wilkins reports that top US AI employers offer a median base salary of $183,250, according to TalentNeuron salary data. The median base salary for US AI jobs overall is $143,000. In comparison, the US Bureau of Labor Statistics in May 2023 reported a median annual wage of $104,420 for computer and information technology occupations.Overall, the data suggests that top AI employers generally pay more than other employers, particularly in the IT sector, Karjalainen says. "This higher compensation reflects the specialized skills and expertise required for AI roles, as well as the high demand for AI talent in the job market"Talent HubsAccording to WilsonHCG statistics, California's San Francisco-Oakland-Hayward, metro area has 10,740 AI professionals, making it the leading AI talent hub. In second place with 5,422 AI professionals is the New York-Newark-Jersey City-NY-NJ-PA region. "This area is a significant center for finance, media, and technology, attracting a diverse range of AI talent," Karjalainen notes. The Seattle-Tacoma-Bellevue, Washington metro area, with 3,139 AI professionals, is another key location, driven by the presence of major tech companies and a strong innovation culture.About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 68 Views
-
WWW.INFORMATIONWEEK.COMHow Quantum Machine Learning WorksLisa Morgan, Freelance WriterNovember 5, 20247 Min ReadKittipong Jirasukhanont via Alamy Stock As quantum computing continues to advance, so too are the algorithms used for quantum machine learning, or QML. Over the past few years, practitioners have been using variational noisy intermediate-scale quantum (NISQ) algorithms designed to compensate for noisy computing environments.There's a lot of machine learning algorithms in that vein that run in that kind of way. You treat your quantum program as if it was a neural network, says Joe Fitzsimons, founder and CEO Horizon Quantum Computing, a company building quantum software development tools. You write a program that has a lot of parameters in it that you don't set beforehand, and then you try to tune those parameters. People call these quantum neural networks. You also have variational classifiers and things like that that fall into that category.One can also take an existing classical machine learning model and try to accelerate its computation using a quantum computer. Noise is a challenge, however, so error correction is necessary. Another requirement is quantum random access memory (QRAM, which is the quantum equivalent of RAM).If we can get lower noise quantum computers, if we can start building the RAM, then there's really enormous potential for quantum computers to accelerate a classical model or a quantum native model, says Fitzsimons. You can play with the variational algorithms today, absolutely, but achieving the more structured algorithms and getting to error-corrected quantum random access memory is five years and several Nvidia hardware generations away.Related:QML Needs to MatureWhile quantum computing is not the most imminent trend data scientists need to worry about today, its effect on machine learning is likely to be transformative.The really obvious advantage of quantum computing is the ability to deal with really enormous amounts of data that we can't really deal with any other way, says Fitzsimons. We've seen the power of conventional computers has doubled effectively every 18 months with Moore's Law. With quantum computing, the number of qubits is doubling about every eight to nine months. Every time you add a single qubit to a system, you double its computational capacity for machine learning problems and things like this, so the computational capacity of these systems is growing double exponentially.Quantum machines will allow organizations to model and understand complex systems in a computational way, and the potential use cases are many, ranging from automotive and aerospace to energy, life sciences, insurance, and financial services to name a few. As the number of qubits rises, quantum computers can handle increasingly complex models.Related:Joe Fitzsimons, Horizon Quantum ComputingWith classical machine learning, you take your model and you test it against real-world data, and that's what you benchmark off, says Fitzsimons. Quantum computing is only starting to get towards that. It's not really there yet, and that's whats needed for quantum machine learning to really take off, you know, to really become a viable technology, we need to [benchmark] in the same way that the classical community has done, and not just single shots on very small data sets. A lot of quantum computing is reinventing what has already been done in the classical world. Machine learning in in the quantum world, has a long way to go before we really know what its limits and capabilities are.Whats Happening With Hybrid ML?Classical ML isnt practical for everything, and neither is QML. Classical ML is based on classical AI models and GPUs while quantum machine learning (QML) uses entirely different algorithms and hardware that take advantage of properties like superposition and entanglement to boost efficiency exponentially, says Romn Ors, Ikerbasque research professor at DIPC and chief scientific officer of quantum AI company Multiverse Computing.Related:Classical systems represent data as binary bits: 0 or 1. With QML, data is represented in quantum states. Quantum computers can also produce atypical patterns that classical systems cant produce efficiently, a key task in machine learning, says Ors.Classical ML techniques can be used to optimize quantum circuits, improve error-correcting codes, analyze the properties of quantum systems and design new quantum algorithms. Classical ML methods are also used to preprocess and analyze data that will be used in quantum experiments or simulations. In hybrid experiments, todays NISQ devices work on the parts of the problem most suited to the strengths of quantum computing while classical ML handles the remaining parts.Quantum-inspired software techniques can also be used to improve classical ML, such as tensor networks that can describe machine learning structures and improve computational bottlenecks to increase the efficiency of LLMs like ChatGPT.Its a different paradigm, entirely based on the rules of quantum mechanics. Its a new way of processing information, and new operations are allowed that contradict common intuition from traditional data science, says Ors. Because of the efficient way quantum systems handle information processing, they are also capable of manipulating complex data to represent complex data structures and their correlations. This could improve generative AI by reducing energy and compute costs as well as increasing the speed of the drug discovery process and other data-intensive research. QML also could be used to develop new types of neural networks that use quantum properties that significantly improve inference, explainability, and training efficiency.Theres a lot of innovation happening at various levels to solve various pieces of all things quantum, including system design, environmental optimization, new hardware and software.Romn Ors, Multiverse ComputingIn addition to developing better quantum hardware to run QML, people are also exploring how to implement hybrid systems that combine generative AI modules, such as transformers, with quantum capabilities, says Ors.Like classical ML, QML isnt a single thing.As with other aspects of quantum computing, there are different versions of quantum machine learning. These days, what most people mean by quantum machine learning is otherwise known as a variational quantum algorithm, says Stefan Leichenauer, VP of engineering at Sandbox AQ. This means that quantum computation depends on a whole set of numerical parameters, and we have to adjust those parameters until the computation solves a problem for us. The situation is exactly analogous to that of classical machine learning, where we have neural networks that depend on a set of parameters, namely the weights and biases. Adjusting those parameters happens through training, and that is the same between classical and quantum machine learning.Because quantum machines are small and error-prone, most development of QML algorithms is done by simulating a quantum device using a classical computer. The problem with that is that experiments are limited to small instances of problems, which means that performance on realistic problem sizes remains unknown.Quantum machine learning is most likely to be useful on problems which are natively quantum. This means problems that involve modeling complex quantum phenomena, such as exotic materials. Even in that domain, the jury is still out on quantum machine learning and its usefulness, says Leichenauer. The really exciting quantum algorithms are the so-called fault-tolerant algorithms, which require large, fully error-corrected quantum computers to execute. No one knows if quantum computers will be practically useful before they reach that scale and level of sophistication, but quantum machine learning algorithms are the best idea that people have had that might end up being useful sooner. It still might turn out that quantum machine learning is not practically useful, and we will have to wait for full fault-tolerance before quantum computers take off.Read more about:Quantum ComputingAbout the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Σχόλια 0 Μοιράστηκε 68 Views
-
WWW.INFORMATIONWEEK.COMIranian Threat Actors Ramp Up Ransomware, Cyber ActivityThis summer, the Federal Bureau of Investigation (FBI), Cybersecurity and Infrastructure Security Agency (CISA), and the Department of Defense Cyber Crime Center (DC3) released a joint advisory on Iran-based threat actors and their role in ransomware attacks on organizations in the US and other countries around the globe.With the US presidential election coming to a close, nation state activity from Iran could escalate. In August, Iranian hackers compromised Donald Trumps presidential campaign. They leaked compromised information and sent stolen documents to people involved in Joe Bidens campaign, CNN reports.What are some of the major threat groups associated with Iran, and what do cybersecurity stakeholders need to know about them as they continue to target US organizations and politics?Threat GroupsA number of advanced persistent threat (APT) groups are affiliated with the Islamic Revolutionary Guard Corps (IRGC), a branch of the Iranian armed forces. [Other] relatively skilled cyber threat actor groups maintain arms distance length from the Iranian government, says Scott Small, director of cyber threat intelligence at Tidal Cyber, a threat-informed defense company. But they're operating pretty clearly on behalf [of] or aligned with the objectives of the Iranian government.Related:These objectives could be espionage and information collection or simply disruption. Hack-and-leak campaigns, as well as wiper campaigns, can be the result of Iranian threat actor activity. And as the recent joint advisory warns, these groups can leverage relationships with major ransomware groups to achieve their ends.Look at the relationships [of] a group like Pioneer Kitten/Fox Kitten. They're partnering and collaborating with some of the world's leading ransomware groups, says Small. These are extremely destructive malware that have been extremely successful in recent years at disrupting systems.The joint advisory highlights Pioneer Kitten, which is also known by such names as Fox Kitten, Lemon Sandstorm, Parisite, RUBIDIUM, and UNC757, among others. The FBI has observed these Iranian cyber actors coordinating with groups like ALPHV (also known as BlackCat), Ransomhouse, and NoEscape. The FBI assesses these actors do not disclose their Iran-based location to their ransomware affiliate contacts and are intentionally vague as to their nationality and origin, according to the joint advisory.Many other threat groups affiliated with Iran have caught the attention of the cybersecurity community. In 2023, Microsoft observed Peach Sandstorm (also tracked as APT33, Elfin, Holmium, and Refined Kitten) attempting to deliver backdoors to organizations in the military-industrial sector.Related:MuddyWater, operating as part of Irans Ministry of Intelligence and Security (MOIS), has targeted government and private sector organizations in the oil, defense, and telecommunications sectors.TTPsThe tactics, techniques, and procedures (TTPs) leveraged by Iranian threat actor groups are diverse. Tidal Cyber tracks many of the major threat actors; it has an Iran Cyber Threat Resource Center. Small found the top 10 groups his company tracks were associated with approximately 200 of the MITRE ATT&CK techniques.Certainly, this is just one data set of known TTPs, but just 10 groups being associated with about a third of well-known TTPs, it just demonstrates the breadth of techniques and methods used by these groups, he says.The two main avenues of compromise are social engineering and exploitation of unpatched vulnerabilities, according to Mark Bowling, chief information, security, and risk officer atExtraHop, a cloud-native cybersecurity solutions company.Social engineering conducted via tactics like phishing and smishing can lead to compromised credentials that grant threat actors system access, which can be leveraged for espionage and ransomware attacks.Related:Charming Kitten (aka CharmingCypress, Mint Sandstorm, and APT42), for example, leveraged a fake webinar to ensnare its victims, policy experts in the US, Europe, and Middle East.Unpatched vulnerabilities, whether directly within an organizations systems or its larger supply chain, can also be a useful tool for threat actors.They find that vulnerability and if that vulnerability has not been patched quickly, probably within a week, an exploit will be created, says Bowling.The joint advisory listed several CVEs that Iranian cyber actors leverage to gain initial access. Patches are available, but the advisory warns those will not be enough to mitigate the threat if actors have already gained access to vulnerable systems.Potential VictimsWho are the potential targets of ongoing cyber campaigns of Iran-based threat actors? The joint advisory highlighted defense, education, finance, health care, and government as sectors targeted by Iran-based cyber actors.What is the case with a lot of nation-state-sponsored threat activity right now, it's targeting a little bit of anyone and everyone, says Small.As the countdown to the presidential election grows shorter, threat actors could be actively carrying out influence campaigns. This kind of activity is not novel. In 2020, two Iranian nationals posed as members of the far-right militant group the Proud Boys as a part of a voter intimidation and influence campaign. Leading up to the 2024 election, we have already seen the hack and leak attack on the Trump campaign.Other entities could also fall prey to Iranian threat actor groups looking to spread misinformation or to simply create confusion. It's possible that they may target government facilities, state or local government, just to add more chaos to this already divided general election, says JP Castellanos, director of threat intelligence for Binary Defense, a managed detection and response company.Vulnerable operational technology (OT) devices have also been in the crosshairs of IRGC-sponsored actors. At the end of 2023, CISA, along with several other government agencies, released an advisory warning of cyber activity targeting OT devices commonly used in water and wastewater systems facilities.In 2023, CyberAv3ngers, an IRGC-affiliated group, hacked an Israeli-made Unitronics system at a municipal water authority in Pennsylvania. In the wake of the attack, screens at the facility read: "You Have Been Hacked. Down With Israel, Every Equipment 'Made In Israel' Is CyberAv3ngers Legal Target."The water authority booster station was able to switch to manual operations, but the attack serves as an ominous warning.The implications there were pretty clear that something else further could have been done tampering with the water levels and safety controls, things along those lines, says Small.As the Israel-Hamas war continues, organizations in Israel and allied countries could continue to be targets of attacks associated with Iran.The education sector has also seen elevated levels of Iran-based cyber activity, according to Small. For example, Microsoft Threat Intelligence observed Mint Sandstorm crafting phishing lures to target high-profile individuals at research organizations and universities.Escalating ThreatsIran is one of many nation state threat actors actively targeting public and private sector organizations in the US. Russia, North Korea, and China are in the game, too. In addition to politically motivated threat actors, enterprise leaders must contend with criminal groups motivated not by any specific flag but purely by profit.As a cyber defender, how much bandwidth do you have? How many groups can you possibly keep track of? We're always talking about prioritization, says Small.Castellanos points out that Iran is sometimes considered a lower tier threat, but he thinks that is a mistake. I would strongly recommend to not treat Iran as something not to worry about, he warns.Enterprise leaders are increasingly pressed to consider geopolitical tensions, the risks their organizations face in that context, and the resources available to mitigate those risks.Bowling stresses the importance of investing in talent, processes, and technology in the cybersecurity space.You can have good processes, and you can have good people. But if you don't have the technology that allows you to see the attackers and allows you to respond faster to the attack, then you're not going to be successful, he says.As enterprises continue to combat cyber threats from Iran, as well as other nation states and criminal groups, information sharing remains vital. That sharing of information [and] intelligence, that's actually what leads to a lot of these alerts being published and then it becomes usable by the rest of the community, says Small.0 Σχόλια 0 Μοιράστηκε 73 Views
και άλλες ιστορίες