


News and Analysis Tech Leaders Trust
Recent Updates
-
Medallion Architecture: A Layered Data Optimization Model
www.informationweek.com
Martin Fiore, Santhosh Kumar Lalgudi | February 21, 2025 | 6 Min Read
Image: Freer Law via Alamy Stock

Along with the emergence of generative artificial intelligence (GenAI) has come a surging demand for data and data center capacity to host growing AI workloads. And more and more organizations find themselves in the race to build the infrastructure and data center capacity capable of supporting the current and future use of AI and machine learning (ML).

For finance functions, high-quality, well-organized, and trustworthy data is essential in the development of effective AI-driven operating models. And while speed is a big factor, trust and safety are even greater concerns in a technology environment where there are few guardrails for AI risk management. Just think of the internet with no rules around e-commerce, privacy, or business and personal safety.

So where does a management team get a handle on the critical issues around an AI approach that is both highly efficient from an operations standpoint and optimized for risk management? We believe in this case that the past can be the prologue: consider a principle known as the medallion architecture -- a commonly used industry framework for managing large-scale data processing in cloud environments. For many of the same reasons it works so well there, we also find it applies well to data engineering. It's particularly well suited for tax and finance operations, where data is one of the most valuable assets and for which flexible, scalable, and reliable data management is essential for regulatory compliance, speed, and accuracy.

A Layered Approach

The reality is that data and AI are essentially inseparable in our new digital era. While data has existed for a long time without AI, AI does not exist without data. By extension, a solid data strategy is required for achieving meaningful returns on AI value, and medallion architecture is a highly effective data management tool that helps get the most out of an organization's AI investment. As a data engineering model, it organizes information into three distinct tiers of bronze, silver, and gold medals. Each layer has a specific role in the data pipeline, designed to facilitate clean, accurate, and optimized dataflows for downstream processes:

Bronze: This is the raw data layer. The data is ingested from various sources, including structured, semi-structured, and unstructured formats. At this stage, the data is stored in its original form without any significant transformation. This serves as a robust foundation, providing a full audit trail and allowing businesses to revisit the raw data for future needs.

Silver: In this intermediate stage, data from the bronze layer is cleaned, filtered, and structured into a more usable format. This involves applying necessary transformations, removing duplicates, filling in missing data, and applying quality checks. The silver layer acts as a reliable data set that can be used for analysis, but it's still not fully optimized.

Gold: This is the final stage of the data pipeline where the silver data is further refined, aggregated, and structured for direct consumption by analytics tools, dashboards, and decision-making systems. The gold layer delivers highly curated, trusted data that's ready for use in real-time reporting and advanced analytics.

Applying the Benefits of Medallion Architecture in the Finance Sector

For financial institutions, data management needs are highly complex.
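To make the three layers concrete before turning to those needs, here is a minimal sketch of a medallion pipeline. It assumes a Delta-enabled Spark environment, and the paths, table names, and transaction schema are illustrative assumptions only; none of them come from the authors.

```python
# Minimal bronze/silver/gold sketch using PySpark with Delta tables.
# Paths, table names, and the sample transaction schema are illustrative
# assumptions, not taken from the article.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: ingest raw transaction files as-is, keeping a full audit trail.
bronze = (spark.read.format("json").load("/landing/transactions/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/lake/bronze/transactions")

# Silver: clean, deduplicate, and enforce basic quality rules.
silver = (spark.read.format("delta").load("/lake/bronze/transactions")
          .dropDuplicates(["transaction_id"])
          .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
          .withColumn("trade_date", F.to_date("trade_ts")))
silver.write.format("delta").mode("overwrite").save("/lake/silver/transactions")

# Gold: aggregate into a curated, report-ready table for dashboards.
gold = (silver.groupBy("trade_date", "counterparty")
        .agg(F.sum("amount").alias("gross_exposure"),
             F.count("*").alias("trade_count")))
gold.write.format("delta").mode("overwrite").save("/lake/gold/daily_exposure")
```

The point of the separation is that raw data is never rewritten, and each layer can be recomputed from the one below it.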
Banks, trading firms, and FinTech companies process enormous amounts of data daily, with requirements for accuracy, speed, and regulatory compliance. Medallion architecture addresses the following needs:

1. Improved data quality and governance. Financial institutions must ensure data accuracy and completeness in alignment with strict regulatory requirements, such as Basel III, the Sarbanes-Oxley Act (SOX), and MiFID II. The multilayered features of medallion architecture support data quality checks that can be applied at each stage. By moving from the bronze to gold layer, data undergoes multiple transformations and validations, improving accuracy and reducing errors. It also facilitates better data governance and traceability, allowing for easier auditing and compliance reporting.

2. Scalability for large data volumes. The financial sector often deals with massive data sets -- from transaction histories and market feeds to customer data. The layered approach makes it easier to scale these data pipelines. Since the raw data in the bronze layer is stored in its original form, it can handle the ingestion of high volumes of data without requiring immediate transformations. As data moves to the silver and gold layers, the architecture supports scalable processing frameworks that enable financial institutions to efficiently process large data sets.

3. Faster time to insights. In fast-paced financial markets, speed is essential. Trading firms, for example, need real-time data to make decisions on market movements. The medallion structure allows financial institutions to separate raw data ingestion from data analytics. Analysts can start working on silver and gold layers for immediate insights, while engineers refine and clean the data in the background. This results in quicker access to actionable insights, essential for high-frequency trading or real-time fraud detection.

4. Flexibility and agility. Medallion architecture offers flexibility in handling diverse data sources and types -- an essential feature in the financial industry, where data comes from numerous channels. The bronze layer's ability to store raw data in its native form makes it easy to adapt to new data types or sources without needing immediate transformations, while the silver and gold layers can be adjusted to reflect new business requirements, market conditions, or regulatory changes.

5. Cost efficiency. Processing large volumes of financial data is expensive. Separating the raw data from the processed data helps reduce unnecessary data transformations and storage costs. Financial institutions can optimize their compute resources by running complex transformations only when needed, thus lowering operational costs.

6. Enhanced security and risk management. Raw data in the bronze layer can be heavily restricted, with only authorized personnel able to access it, while the curated gold layer can be more widely available for analysis. This segmentation of data access allows for tighter security controls and reduces the attack surface.

7. Advanced analytics and machine learning. From algorithmic trading to fraud detection and credit risk analysis, ML and AI are very important to the financial industry, and this approach facilitates advanced analytics by providing high-quality, structured data in the gold layer.
Additionally, having access to both silver and bronze layers enables data scientists to work with both historical and refined data, both of which are essential for building accurate predictive models.

Medallion architecture is an effective framework for financial sector data management and processing in the digital era. Its layered approach offers financial institutions the capability to handle vast volumes of data efficiently, while providing data quality, compliance, and scalability. Using this layered approach, financial firms gain better control over their data pipelines, reduce costs, and drive innovation through advanced analytics. As data management plays an increasingly crucial role in contemporary business, this framework helps position financial firms for success in a data-driven world.

About the Authors

Martin Fiore, Deputy Vice Chair of Tax, EY Americas: Martin Fiore is EY Americas Deputy Vice Chair of Tax. He is author of the 2021 award-winning book, Humanity Reimagined, forecasting current advancements in human and technology convergence and the need for guardrails in developing AI and other transformative technologies.

Santhosh Kumar Lalgudi, Data & Cloud Technology Solutions Leader, EY Americas: Santhosh Kumar Lalgudi is the Data & Cloud Technology Solutions Leader in the EY Americas Tax Technology & Transformation practice. He is the author of several articles on the role of data and technology in modernizing finance and tax functions.
-
Driving Innovation and Efficiency Through Automation
www.informationweek.com
Brandon Taylor, Digital Editorial Program Manager | February 21, 2025 | 5 Min View

Investing in substantial automation that enables agile and strategic business operations is vital to compete and grow in today's digital landscape.

In this archived keynote session, Rachel Lockett, vice president of business technology solutions and operations at Surescripts, and Jason Kikta, CISO and senior vice president of product at Automox, discuss how organizations are utilizing automation to find value and regroup to meet challenges.

This segment was part of our live virtual event titled "The CIO's Guide to IT Automation in 2025: Enabling Innovation & Efficiency." The event was presented by InformationWeek on February 6, 2025.

A transcript of the video follows below. Minor edits have been made for clarity.

Rachel Lockett: So, the outcomes and consequences of alert fatigue in all its different forms can include ignored alerts, slowed response times, and ultimately not reacting with urgency when something is due. They can also result in burnout. Since joining the healthcare field, I have heard more now about provider burnout.

There have been news stories about alert fatigue resulting in things being missed and ignored that resulted in patient deaths. So again, let's make a correlation to the technology field. What have you seen in your experience? What have been the direst consequences and costly mistakes that you've seen because of alert fatigue and lack of automation?

Jason Kikta: I think one of the best and easiest examples for people to orient on when they think about it, especially at the intersection of IT and security, is the number of vulnerabilities. So, this is the slide that you and I showed the audience when we met last year. This was the projection for the number of CVEs.

The number of security vulnerabilities in software was growing at an alarming rate and becoming a lot to process. We talked about this, and we said by the time we get to 2025 it's going to be up to 32,000 a year, and it's going to be bad. We had 28,000 in 2023, but then in 2024 we had 40,000! It totally blew out the curve.

Now, there is some nuance here, right? This is not necessarily a bad thing in terms of cybersecurity, because part of this is vendors have gotten better as well as security researchers. They've gotten better at finding these vulnerabilities, and vendors have become more disciplined in reporting these vulnerabilities.

So, there is some healthiness to those numbers being high, but it still doesn't change the base condition. I spoke to a company late last year, and their security team was trying to manually read through every CVE that was released by every vendor and match it up with their environment to see if they had it somewhere in their tech stack.

Then, they would make a manual determination about how they were going to proceed. Were they going to patch it? If so, how quickly were they going to patch it? It was mind-boggling. I thought to myself, how do you keep up? The gentleman I spoke to chuckled and said, well, we keep up poorly. Poorly is the answer.

RL: Right, because first, that's intensive labor based on the cost involved. But how can you catch up on time? There's going to be a delayed response because there's just too much volume.

JK: Another great example is the National Vulnerability Database where they can't even keep up.
They are the ones charged with maintaining the global authoritative database, and they've had trouble keeping up. And this was as of last summer.

They don't have newer numbers out, but their last announcement in November was that we've added a lot of external contractor support, and paid a lot of money to bring on this extra capacity. We are now keeping up with all the new ones, but we're still behind in the backlog. We don't have an effective way to burn that down.

These problems are not getting better; in fact, they're getting worse on the demand side. So, we must fix the supplies, or maybe it's backwards. Maybe it's the supply side, right? The amount that needs to be dealt with is just going to keep rising, and the ability to keep up with it manually is going to be overwhelming. So, you must fix it through better automation and thinking through these processes more holistically.

RL: You brought up exactly what I wanted to talk about next. Again, always coming at these things from the human impact perspective. A common solution, which you just described, is to throw more people at the problem, right? Hire more contractors and let's just keep throwing more people at the problem.

Things like rotating responsibilities between team members can help to reduce the impact of alert fatigue for a while, but it's just not a sustainable long-term solution. There's also another industry trend that's making this harder and harder to do, and that's the shortage of technology resources. We talked about this last summer.

What's happened since then? Is the problem of scarce technology resources getting better? Is it getting worse? Is it remaining the same? Where are we at?

Watch the archived "CIO's Guide to IT Automation in 2025: Enabling Innovation & Efficiency" live webinar on-demand today.

About the Author

Brandon Taylor, Digital Editorial Program Manager: Brandon Taylor enables successful delivery of sponsored content programs across Enterprise IT media brands: Data Center Knowledge, InformationWeek, ITPro Today and Network Computing.
-
AI Is Improving Medical Monitoring and Follow-Up
www.informationweek.com

Ensuring continuity of care in a clinic or hospital is a nightmare of complexity. Coordinating test results, imaging, medication, and monitoring of vital signs has proven challenging to an industry reliant on ponderous technologies and deficient staffing. When patients are dealing with unfolding health crises and chronic conditions or recovering from procedures at home, managing their care becomes even more complex.

Doctors may miss important findings that can impact patients' prognosis and treatment -- leaving those patients without necessary information on how to make healthcare decisions.

Some 97% of available data may go unreviewed, per the World Economic Forum. And Electronic Health Records (EHRs) are messy and riddled with errors.

Following up with patients to ensure that they are receiving proper treatment based on the 3% of data that is reviewed constitutes a significant burden on providers.

Even when patients are stable and their cases have received thorough review, they may find that obtaining insights on how to best manage their situations is next to impossible, placing multiple phone calls to overloaded call centers only to spend hours on hold, poring over pages of inscrutable instructions, and attempting to interpret their own results using unreliable home tests and monitors.

Artificial intelligence technologies have shown promise in managing some of the worst inefficiencies in patient follow-up and monitoring. From automated scheduling and chatbots that answer simple questions to review of imaging and test results, a range of AI technologies promise to streamline unwieldy processes for both patients and providers.

These innovations promise to both free up valuable time and increase the likelihood that effective care is delivered. AI chart reviews may detect anomalies that require follow-up, and AI review of images may detect early signs of conditions that escape human review.

But, as with other AI technologies, keeping humans in the loop to ensure that algorithmic errors do not result in damage remains challenging. When is a chatbot not enough? And when it isn't, can a patient actually talk to their provider?

InformationWeek delves into the potential of AI-managed medical monitoring and follow-up, with insights from Angela Adams, CEO of AI imaging follow-up company Inflo Health, and Hamed Akbari, an assistant professor in the Department of Bioengineering at Santa Clara University who works on AI and medical imaging.

Administrative AI

Anyone who has gone through the healthcare system -- so, basically everyone -- knows how hideous the administrative procedures can be. It's bad enough trying to schedule a primary care appointment with some clinics. But what about patients who are in recovery from surgery or suffering from debilitating chronic conditions?

AI solutions may smooth out these processes for both the patient and the clinic. AI-assisted platforms offer efficient means of scheduling appointments, refilling prescriptions, and getting answers to simple questions about treatment. Patients can simply respond to a text message or fill out a form indicating their needs.

Some 60% of respondents to a 2022 survey preferred intuitive, app-like services from their providers.

Patients may be more inclined to respond to texts or emails generated by AI programs because they can do so on their own time rather than taking a call at an inconvenient moment.
They are thus able to provide useful feedback unrelated to their immediate needs -- on how they rate their experience with a provider, for example -- when they might otherwise not be willing to do so.

In the case of anomalous responses -- a complication or a dosage problem -- a staff member can then follow up with a call or message to address the issue personally. Missed appointments can be flagged, indicating the need for follow-up and also coordinating openings that might be used by other patients who might otherwise need to wait.

More than 70% of patients prefer self-scheduling, according to an Experian report. And up to 40% of calls to clinics relate to scheduling. Reduced call volumes can lead to enormous cost savings and free up time for dealing with more exigent issues that require attention and analysis by live medical professionals.

Medication Follow-Up and Adherence

Adherence to medication regimens is essential for many health conditions, both in the wake of acute health events and over time for chronic conditions.

AI programs can both monitor whether patients are taking their medication as prescribed and urge them to do so with programmed notifications. Feedback gathered by these programs can indicate the reasons for non-adherence and help practitioners to devise means of addressing those problems.

Adherence to diabetes management regimens is complicated by lifestyle, socioeconomic status, severity of disease, and unique personality factors, for example. AI programs that take these factors into account may assist practitioners and patients in refining protocols so that they are both realistic and effective.

A study that used a smartphone app to remind stroke victims to take their medication and then followed up with blood tests to ensure that they had done so found significant increases in adherence to the drug protocol, resulting in better health outcomes.

AI programs can also use patient data to devise optimal dosing for drugs. Therapeutic drug monitoring has historically been a challenge given the differing reactions of patients to drugs, both alone and in combination, according to their unique physiology.

They can even correlate dosing to the effects of the drugs -- a significant advance for conditions in which treatments themselves can have deleterious effects. Chemotherapy drugs, for example, can thus be optimized to maximize effectiveness and minimize side effects.

Monitoring of Chronic Conditions

Using AI to monitor the vital signs of patients suffering from chronic conditions may help to detect anomalies -- and indicate adjustments that will stabilize them. Keeping tabs on key indicators of health such as blood pressure, blood sugar, and respiration in a regular fashion can establish a baseline and flag fluctuations that require follow-up treatment, using both personal and demographic data related to age and sex and comparing it to available data on similar patients.

Remote patient monitoring (RPM) devices, such as blood pressure monitors, pulse oximeters, and glucose meters, can be linked to AI programs that analyze the data they collect and draw useful conclusions from it. Education and health literacy levels vary among populations.

Automated summaries can assist patients in understanding the complexities of the information used to determine their status and assume agency in managing their conditions.
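As a rough illustration of the baseline-and-flag logic described above, here is a small sketch. The reading format, minimum history length, and deviation rule are illustrative assumptions, not clinical guidance and not drawn from any product named in this article.

```python
# Toy sketch of flagging remote-monitoring readings against a per-patient
# baseline. Data shapes and thresholds are illustrative assumptions only.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Reading:
    systolic: float   # mmHg from a home blood pressure cuff
    glucose: float    # mg/dL from a glucose meter

def flag_anomalies(history: list[Reading], latest: Reading, z: float = 2.0) -> list[str]:
    """Flag vitals that drift more than `z` standard deviations from the
    patient's own recent baseline, so a clinician can follow up."""
    alerts = []
    for field in ("systolic", "glucose"):
        values = [getattr(r, field) for r in history]
        if len(values) < 5:
            continue  # not enough history to establish a baseline yet
        baseline, spread = mean(values), stdev(values)
        current = getattr(latest, field)
        if spread and abs(current - baseline) > z * spread:
            alerts.append(f"{field}: {current} vs baseline {baseline:.0f}")
    return alerts

history = [Reading(122, 105), Reading(118, 110), Reading(125, 98),
           Reading(120, 102), Reading(119, 108)]
print(flag_anomalies(history, Reading(158, 104)))  # flags systolic only
```

In practice, the output of rules like this would feed the plain-language summaries described above rather than being shown to patients as raw numbers.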
Even highly educated patients who are already invested in their own care will likely benefit from the efficiency of having their information synthesized in an easily comprehensible manner.

Simplified readouts generated by AI programs can be especially helpful when patients are suffering from multiple conditions -- comorbidities -- that can make it even more difficult for them to manage their own care and communicate their needs to providers.

Both acute changes and patterns, such as a heart rate that lowers over time, can help providers to assess when interventions such as medication adjustment and even surgery may be necessary.

Hamed Akbari, Santa Clara University

Prognosis can be improved if deterioration is detected early. Even in the case of necessary surgery, it can be scheduled prior to an emergent and life-threatening event. In situations where the condition may become life-threatening or terminal, AI may even be able to plot out the likely progression of the disease based on lab findings, allowing for a more realistic approach to treatment and end-of-life planning.

"We have many patients in our studies," Akbari says. "We know when they passed away. We can determine the length of the survival based on our model."

Imaging Follow-Up

AI has also shown great promise in augmenting human analysis of radiology findings -- X-rays, MRIs, and CT scans, among other technologies. While examination by specialists remains crucial, AI programs now offer increasingly sophisticated means of detecting subtle patterns that may evade even the most skilled radiologists.

An AI analysis of mammograms found that the program was more effective than humans at detecting early signs of breast cancer, for example. Adams relates the story of a friend and colleague whose breast cancer was detected on imaging while she was hospitalized for another condition. However, she was never notified and the cancer fatally metastasized. Adams and her colleagues were horrified that this incidental finding had been missed.

They dug further and found that such incidents were far from uncommon. Even findings that are detected by radiologists do not come to the attention of providers and patients due to time constraints.

"It was astounding to us that nearly 50 to 60% of those follow-ups were just missed," Adams said. This led to Inflo Health's mission -- reducing missed results and ensuring proper follow-up.

"Non-critical follow-ups definitely need care," Adams urges. "But findings that are not part of the critical workflow are tossed into a pile."

Other programs have improved detection of such conditions as pneumonia and appendicitis.

And the identification of novel diseases, such as COVID-19, may also be improved by AI image examination. Radiologists may not be as familiar with the presentation of new diseases on imagery. Rapid identification of patterns in the progression of a new disease using AI programs may be able to assist in diagnosis.

Interpretation of microscopic images has also been improved by AI, allowing for quicker identification of pathogens in samples taken from patients.

The increased follow-up rates can be substantial -- Inflo Health's partnership with the East Alabama Medical Center resulted in a 74% increase in follow-ups on lung nodules detected by their technology in radiology reports, while reducing the time it took to flag the findings by 95%.

A study on detection of aortic aneurysms found that detection of additional complications was increased by 80% using an AI program.
While those complications may have eventually been discovered by human radiologists, the research found that reporting time was reduced by 63%. Other research suggests that AI-assisted follow-up scheduling has improved detection of aneurysm complications. Another project discovered that an AI-enhanced workflow significantly improved follow-up by patients diagnosed with a diabetes-related eye condition.

AI programs can also simplify complex arrays of imagery. AI can uncover patterns and relationships in imaging data that are not visible to the human eye. "You can come up with one map that shows multiple MRI sequences. So instead of looking at five or six different MRI sequences, you just look at one," Akbari says. And by analyzing large databases of images and the notes that accompany them, these programs can detect early signs of pathology and thus facilitate earlier, more effective treatment.

Such results suggest that collaboration between humans and AI may provide benefits to both patients and the institutions that serve them. Integration into the actual care of the patient is key. If a problem is flagged by an AI program and nothing happens, the finding cannot be acted upon. Adams is insistent that both the patient and provider must be notified when AI programs pick up a finding that has been missed.

Angela Adams, Inflo Health

"We didn't just focus on the math and the AI problem," she says. "We focused on taking that information that we identified and making sure that it worked within the clinical workflow."

Surgical and Hospitalization Follow-Up

Once a condition has been diagnosed and treated, an additional array of issues emerges. In addition to coordinating appointments to assess progress, at-home care needs to be tracked.

Post-surgical patients are likely to have numerous questions about how to monitor their conditions and ensure that their recovery is proceeding as predicted. This can result in time-consuming phone calls and emails for both patient and provider. Patients are often provided with packets of confusing information that attempt to guide them through recovery. They are likely to encounter situations that are not explained adequately by these materials. Or they may not receive any directions at all.

Adams points to the challenges of following up on hospital visits. "If you think about how quickly a patient is in and out of the ER, many times the final report doesn't come back until the patient's already out. It doesn't even give an opportunity for the clinical team to talk to the patient," she notes.

Specifically designed chatbots may be able to handle simpler questions that arise and simplify challenging language that some patients may find difficult to interpret.

While it might seem superficially mundane, AI-generated follow-up calls that ensure appointments in the wake of surgeries or hospitalizations may be hugely beneficial. Rehospitalizations in the wake of health events, planned or unplanned, are an indicator of complications and even mortality. They are also a financial liability for hospitals. Medicare reduces reimbursements if patients suffering from certain conditions are readmitted within 30 days, for example.

Keeping patients on track with their care plans, both at home and in follow-up examinations, can reduce rehospitalization events. Manual phone calls have been shown to be helpful in this regard but are time-consuming for both parties.
But even automated calls and surveys can facilitate necessary follow-up and reduce rehospitalization.

AI follow-up must be considered carefully, though. While it may result in efficiencies, some patients will likely be hesitant to direct their questions to automated systems while in a tenuous state. One study found that while AI-managed surgical follow-up calls were useful in collecting data and handling administrative tasks, only 11% of calls handled actual medical consultation.

These systems must be designed to identify the need for conversations and in-person examination rather than serve as a barrier. A system designed for cataract surgery follow-up, for example, specifically filters routine questions and concerns from those that might necessitate additional treatment.

Technology that is currently used for daily monitoring of healthy patients may also be useful in monitoring patients with certain conditions. One study was cautiously optimistic about using Apple Watches to monitor heart abnormalities in cardiac surgery patients.

Personalization of Treatment

The increased sense of autonomy and control offered by these algorithmic approaches may, paradoxically, have a humanizing effect, making patients feel less like lab rats and more like humans who can engage in their own care.

The use of AI to synthesize both historical and live data about individual patients with general data related to their conditions drawn from research and medical record analysis can give both patients and providers a much clearer picture of how to approach their treatment.

Medical professionals often do not have the time -- or inclination -- to make the sophisticated calculations required to devise optimum care. And patients often find it challenging to advocate for themselves while dealing with both challenging health problems and masses of unfamiliar information.

AI can detect patterns that neither party would be capable of perceiving independently. Once these patterns are identified, patients and providers can more effectively collaborate on how to proceed -- whether that be tinkering with the dosage of medications, pursuing follow-up on potentially alarming diagnostic findings, or simply discussing potential lifestyle changes and treatment approaches that might affect long-term prognosis.

"I think the future of AI is in integrated diagnosis and treatment planning," Akbari says. "Communication between different specialties is very limited."

"Unless you have technology married with process and people, you're always going to have failure points," Adams adds. "I would love to see more healthcare AI vendors focus on a holistic approach. When there's an AI failure in healthcare, it affects all of us. We need to establish trust with clinicians, and the only way to do that is to establish learning partnerships, where we can iterate and learn."
-
What Tech Workers Should Know About Federal Job Cuts and Legal Pushback
www.informationweek.com

The Trump administration and its Department of Government Efficiency (DOGE) are firing and laying off thousands of government employees across multiple agencies. In 2024, the federal government employed approximately 116,000 IT workers, and that isn't counting contractors and military and post office employees, Computerworld reports.

These legions of federal tech workers are in the same boat as all federal employees, afloat on a sea of chaos and uncertainty. Several lawsuits have been filed in the wake of the job reductions, but litigation is not a fast-moving process.

InformationWeek spoke to three attorneys about the job cuts, legal action, and what could lie ahead for federal workers.

The Terminations

The total number of federal employees impacted thus far is not clear. Approximately 75,000 employees accepted the deferred resignation offer, referred to as "Fork in the Road," to leave their jobs, according to The Hill. But the program has been paused following a ruling by a federal judge.

Probationary employees, people who typically have been in their roles for less than a year, have been a significant target of layoffs. AP News reports that there are 220,000 federal employees who had been working in their roles for less than a year as of March 2024.

"I don't think we're getting any clear or transparent data about the segments of the government that are being most impacted," Areva Martin, civil rights attorney and managing partner and founder of law firm Martin & Martin, tells InformationWeek.

The workforce reductions are wide-ranging, and the Consumer Financial Protection Bureau (CFPB) is essentially shuttered. Jobs are being cut at the Department of Agriculture, Department of Education, Department of Energy, Department of Health and Human Services, Department of Homeland Security, Department of the Interior, Department of Veterans Affairs, Environmental Protection Agency, Office of Personnel Management, and the list goes on.

The Cybersecurity and Infrastructure Security Agency (CISA), a significant repository of technical talent, is also facing cuts. "Maybe a few weeks ago, we all thought that there were categories of employees that would be protected -- like IT workers, like Department of Defense employees, employees that are essential to our national security like the nuclear safety employees -- that were terminated," says Liz Newman, member and litigation director at The Jeffrey Law Group, which focuses on federal sector employment disputes.

The Lawsuits

A flurry of lawsuits was swift to follow the firings and layoffs ordered by the White House and DOGE.

Several employees who received high marks on recent performance reviews were among those to be caught up in the mass firings, Reuters reports.

"When you're letting people go and you're citing things like their performance and their fit, but at the same time you're letting large groups go indiscriminately without surely looking at their performance and fit, I think that's opening up this administration to some legal liability," says Newman.

Indeed, the Trump administration faces class actions, representing thousands of people, for the way it is handling the firing of probationary employees.

Alden Law Group and legal services nonprofit Democracy Forward are representing civil servants across nine agencies, with plans to cover others, in a complaint filed with the Office of Special Counsel (OSC).
The complaint could go before the Merit Systems Protection Board (MSPB), a government agency that aims to protect "Federal merit systems against partisan political and other prohibited personnel practices," according to the MSPB website.

Complicating matters, the Trump administration is attempting to fire Special Counsel Hampton Dellinger, the head of the OSC, Federal News Network reports.

While that drama unfolds, other pushback is underway. Several labor groups representing federal employees are suing the Trump administration, arguing that the Office of Personnel Management (OPM) does not have the authority to order the mass firings that occurred, Reuters reports.

The National Treasury Employees Union (NTEU) represents more than 1,000 frontline employees, and it is suing the administration to challenge the closure of the CFPB.

As DOGE takes an axe to government agency jobs in the name of saving money and improving efficiency, alarm bells around its access to sensitive data have been clanging. Several lawsuits are underway on that front.

"They've been given unfettered access in some cases to the most private and sensitive information of not only government employees but of US citizens ... I've been tracking lawsuits filed about violations of the Privacy Act of 1974," says Martin.

How successful could legal pushback be?

"I think some of the employees, particularly those employees who again are governed by labor contracts [and] those employees who are civil service employees, they're going to be met with greater success because their due process rights have been violated, and there are clear contractual terms that define how they can be terminated," says Martin.

The outcome of these lawsuits, and of the more that are likely to come, is far from decided, and it could take years for some cases to reach their conclusion.

"Some of these lawsuits may go past Trump's four years in office. But many of them, I suspect, will be resolved during his term in office," says Martin.

She anticipates that some of these cases may make their way to the Supreme Court.

An Uncertain Future

Thousands of federal workers are facing an unclear future: those who accepted the Fork in the Road offer, those who have been terminated, and those who were fired and then asked to come back. The possibility of more job cuts still looms; these frenetic firings took place in the very early days of the Trump administration.

"We hear a lot of sadness from them [federal employees], even more so than the fear of not getting paid is the disappointment in how this has all played out," says Newman.

As we see cases progress through the legal system, there are questions about action the current administration may take to make it easier to terminate federal employees in the future.

"This administration's goal is to make it easier for all employers, not just the federal government but for private employers, too, to be able to fire employees without regard for any of the rights that they previously may have [had]," says Martin.

While some lawsuits may ultimately be successful, it is likely that many people will permanently lose their federal jobs. For those who remain, there are plenty of questions about the future of their employment.

Employees will be subject to "enhanced standards of suitability and conduct as we move forward," according to the Fork in the Road email sent to federal employees.

Brett O'Brien, founder and partner at National Security Law Firm, a military and federal administrative law firm, notes that this could mean a close examination of security clearances.
"This is pertinent to IT professionals because most of them have to have fairly high clearances," he says. "If you have anything that could be concerning from a security clearance perspective, take care of it now."

Elon Musk, head of DOGE, has been vocal about pushing for automation to replace human jobs in the government. But that opens up questions about the IT workforce. "There's got to be someone behind there to troubleshoot and fix the problems," says O'Brien.

What about the long-term outlook for federal employment in general? It will take time to understand the full impact of this upheaval and to see the outcome of legal action, but this has the potential to change the way people view federal employment.

"Traditionally, federal employment has been seen as a fairly steady and consistent career path. And I think that you're starting to see some of those thoughts be challenged and changed," says O'Brien. "Are you going to have talented people still wanting to go into the federal service and maybe make a career of it? They might start looking at it as an opportunity to get a really good experience and then look to jump out quicker."
-
Risk Leaders: Follow These 4 Strategies When Transitioning To Continuous Risk Management
www.informationweek.com
Cody Scott, Senior Analyst, Forrester | February 20, 2025 | 5 Min Read
Image: Paradee Kietsirikul via Alamy Stock

Your organization's single biggest risk is an ineffective risk management program. Organizations tend to focus on compliance objectives while inadvertently undervaluing or deprioritizing risks that could have significant impacts, for many reasons. Compliance goals are prescriptive, with concrete actions to accomplish, making compliance a generally straightforward activity. Risk, on the other hand, is dynamic and complex.

During the early 2000s, some of the largest financial scandals (Enron, WorldCom, Tyco) rocked the business world to its core, unleashing a new regulatory wave of corporate governance and internal controls requirements. In its wake, the three lines of defense (3LOD) were born. And when the Institute of Internal Auditors picked it up 10 years later, the industry branded and prescribed 3LOD as the cure to poor risk management. Yet, like prescription drugs, regulatory support doesn't guarantee effectiveness.

Enter a More Modern Risk Approach: Continuous Risk Management

Here's where we need the right prescription for managing risk. Continuous risk management is a modern approach to ensure that organizations not only take on the right risks in support of their strategic direction but also follow a holistic process to bring risk-based planning and mitigation oversight into the value chain -- a significant gap in the 3LOD approach and in most risk programs today. Continuous risk management unites the business's strategic and operational sides under a common goal -- a pursuit of value -- and formalizes a process, key decision points, and opportunities to change course as project conditions and risk tolerances change over time.

Continuous risk management as a model has two main components:

The first loop (identify, plan, analyze, and design) emphasizes strategic planning and the role of leaders in defining the pursuit of value to which risk and compliance projects will be aligned, ensuring that the pursuit is successful.

The second loop (implement, respond, measure, and monitor) highlights the implementation work that control owners and operations teams perform to keep the pursuit of value on track and optimize mitigation strategies as new risks unfold. Importantly, the model features key inflection points as teams cycle through both loops that allow them to reevaluate decisions and escalate issues accordingly.

Keys To Getting Continuous Risk Management Right

For organizations to get to continuous risk management, they must do these four things:

1. Use the 3LOD model the right way to define roles and ensure segregation of duties. Contrary to popular belief, 3LOD is not a regulatory requirement. If your organization has adopted 3LOD for segregation of duties, you don't need to abandon it. Instead, use 3LOD for its intended purpose: to appropriately define roles and responsibilities. Use the model in combination with the 3LOD to answer the following: What work do we need to do? How should we do it? Who should be involved in the process?

2. Use the continuous risk model to identify gaps in your existing program and create a roadmap to improve the supporting processes, skills, and technology needed. Fortunately, you don't need to start from scratch to get to continuous risk management, as many pieces are likely already in place.
For example, an organization's project management office might operate separately from its enterprise risk and compliance program, indicating a process and communication gap across multiple phases. A security program might operate an extensive tech stack but hasn't integrated the outputs to automatically measure and monitor the effectiveness of controls. Align the continuous risk management phases to your program, document how your current processes support these phases today, and prioritize pain points or disconnects that inhibit any phase.

3. Focus on the pursuit of value. A value is any goal, objective, regulatory requirement, or business outcome that the organization decides to pursue, such as acquiring a new company, entering a new market, or targeting a new customer segment. Value can be operational, like updating an internal process, changing critical suppliers, or maturing existing operational requirements. Value can also come from a technology initiative, such as launching a new application or service or modernizing legacy technology systems. Anchor risk management alongside and throughout the pursuit of value to establish the appropriate context, evaluate trade-offs, and support decision-making that accelerates, rather than impedes, growth, innovation, and resilience.

4. Use the inflection points in the model as opportunities to accelerate governance reviews and approvals. When organizations plan a mitigation project, they might use an assessment to secure budget approval, but at this point, leaders and mitigation owners disconnect, assuming that they'll be informed if the effort is derailed. This reinforces a sunk cost scenario where controls are implemented with little regard to changing strategic or tactical situations until the end of the effort. Use the first inflection point to decide which risks will be accepted or transferred -- and which will be controlled and mitigated throughout the lifecycle. Use the change management inflection point for ongoing feedback or to course-correct. Combined, the initial risk decision and ongoing change management ensure tight collaboration between stakeholders, provide assurance that the organization is managing risk acceptably, and confirm that mitigation and compliance activities fully align with the pursuit of value.

Continuous risk management is conceptually simple yet requires organizations to interrogate their existing risk practices. This means thinking about which practices work well, which ones are lacking, which ones create unnecessary friction, and how technology can shift risk management to the left to accelerate business outcomes. Leave the side effects of poor risk management in the past and transform your program with a proactive solution.

About the Author

Cody Scott, Senior Analyst, Forrester: Cody is a senior analyst at Forrester covering cyber risk management with a focus on cyber risk quantification (CRQ), enterprise risk management (ERM), and governance, risk, and compliance (GRC). Prior to Forrester, Cody served as the first chief cybersecurity risk officer of the National Aeronautics and Space Administration (NASA). He holds a BA in international affairs from the George Washington University and is also a certified expert risk management framework professional.
-
Is a Small Language Model Better Than an LLM for You?
www.informationweek.com
Pam Baker, Contributing Writer | February 20, 2025 | 11 Min Read
Image: Tithi Luadthong via Alamy Stock

While it's tempting to brush aside seemingly minimal AI model token costs, that's only one line item in the total cost of ownership (TCO) calculation. Still, managing model costs is the right place to start in getting control over the end sum. Choosing the right-sized model for a given task is imperative as the first step. But it's also important to remember that when it comes to AI models, bigger is not always better and smaller is not always smarter.

"Small language models (SLMs) and large language models (LLMs) are both AI-based models, but they serve different purposes," says Atalia Horenshtien, head of the data and AI practice in North America at Customertimes, a digital consultancy firm.

"SLMs are compact models, efficient, and tailored for specific tasks and domains. LLMs are massive models, require significant resources, shine in more complex scenarios and fit general and versatile cases," Horenshtien adds.

While it makes sense in terms of performance to choose the right size model for the job, there are some who would argue model size isn't much of a cost argument, even though large models cost more than smaller ones.

"Focusing on the price of using an LLM seems a bit misguided. If it is for internal use within a company, the cost usually is less than 1% of what you pay your employees. OpenAI, for example, charges $60 per month for an Enterprise GPT license for an employee if you sign up for a few hundred. Most white-collar employees are paid more than 100x that, and even more as fully loaded costs," says Kaj van de Loo, CPTO, CTO, and chief innovation officer at UserTesting.

Instead, this argument goes, the cost should be viewed in a different light.

"Do you think using an LLM will make the employee more than 1% more productive? I do, in every case I have come across. It [focusing on the price] is like trying to make a business case for using email or video conferencing. It is not worth the time," van de Loo adds.

Size Matters but Maybe Not as You Expect

On the surface, arguing about model sizes seems a bit like splitting hairs. After all, a small language model is still typically large. An SLM is generally defined as having fewer than 10 billion parameters. But that leaves a lot of leeway too, so sometimes an SLM can have only a few thousand parameters, although most people will define an SLM as having between 1 billion and 10 billion parameters.

As a matter of reference, medium language models (MLMs) are generally defined as having between 10B and 100B parameters, while large language models have more than 100 billion parameters. Sometimes MLMs are lumped into the LLM category too, because what's a few extra billion parameters, really? Suffice it to say, they're all big, with some being bigger than others.

In case you're wondering, parameters are internal variables or learning control settings. They enable models to learn, but adding more of them adds more complexity too.

"Borrowing from hardware terminology, an LLM is like a system's general-purpose CPU, while SLMs often resemble ASICs -- application-specific chips optimized for specific tasks," says Professor Eran Yahav, an associate professor at the computer science department at the Technion Israel Institute of Technology and a distinguished expert in AI and software development.
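A minimal sketch of what that division of labor can look like in practice appears below. The model names, task labels, and routing rule are illustrative assumptions, not a description of any vendor's product.

```python
# Toy router that sends narrow, well-defined tasks to a small model and
# open-ended requests to a large one. Model names, task labels, and the
# routing rule are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Task:
    kind: str      # e.g. "classify_ticket", "extract_fields", "open_qa"
    prompt: str

# Narrow tasks a fine-tuned small model handles cheaply and quickly.
SLM_TASKS = {"classify_ticket", "extract_fields", "sentiment"}

def pick_model(task: Task) -> str:
    if task.kind in SLM_TASKS:
        return "slm-7b-finetuned"   # low latency, low cost per call
    return "llm-frontier"           # versatile fallback for everything else

print(pick_model(Task("extract_fields", "Pull the invoice total from ...")))  # slm-7b-finetuned
print(pick_model(Task("open_qa", "Summarize our Q3 risk posture ...")))       # llm-frontier
```

The specific rule matters less than the pattern: narrow, high-volume work goes to the cheaper, faster model, and everything else falls back to the general-purpose one.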
Yahav has a research background in static program analysis, program synthesis, and program verification from his roles at IBM Research and Technion. Currently, he is CTO and co-founder of Tabnine, an AI coding assistant for software developers.

To reduce issues and level up the advantages in both large and small models, many companies do not choose one size over the other.

"In practice, systems leverage both: SLMs excel in cost, latency, and accuracy for specific tasks, while LLMs ensure versatility and adaptability," adds Yahav.

As a general rule, the main differences in model sizes pertain to performance, use cases, and resource consumption levels. But creative use of any sized model can easily smudge the line between them.

"SLMs are faster and cheaper, making them appealing for specific, well-defined use cases. They can, however, be fine-tuned to outperform LLMs and used to build an agentic workflow, which brings together several different agents -- each of which is a model -- to accomplish a task. Each model has a narrow task, but collectively they can outperform an LLM," explains Mark Lawyer, RWS' president of regulated industries and linguistic AI.

There's a caveat in defining SLMs versus LLMs in terms of task-specific performance, too.

"The distinction between large and small models isn't clearly defined yet," says Roman Eloshvili, founder and CEO of XData Group, a B2B software development company that exclusively serves banks. "You could say that many SLMs from major players are essentially simplified versions of LLMs, just less powerful due to having fewer parameters. And they are not always designed exclusively for narrow tasks, either."

The ongoing evolution of generative AI is also muddying the issue.

"Advancements in generative AI have been so rapid that models classified as SLMs today were considered LLMs just a year ago. Interestingly, many modern LLMs leverage a mixture of experts architecture, where smaller specialized language models handle specific tasks or domains. This means that behind the scenes SLMs often play a critical role in powering the functionality of LLMs," says Rogers Jeffrey Leo John, co-founder and CTO of DataChat, a no-code, generative AI platform for instant analytics.

In for a Penny, in for a Pound

SLMs are the clear favorite when the bottom line is the top consideration. They are also the only choice when a small form factor comes into play.

"Since the SLMs are smaller, their inference cycle is faster. They also require less compute, and they're likely your only option if you need to run the model on an edge device," says Sean Falconer, AI entrepreneur in residence at Confluent.

However, the cost differential between model sizes comes from more than direct model costs like token costs and such.

"Unforeseen operational costs often creep in. When using complex prompts or big outputs, your bills may inflate. Background API calls can also very quickly add up if you're embedding data or leveraging libraries like ReAct to integrate models. It is for this reason scaling from prototype to production often leads to what we call bill shock," says Steve Fleurant, CEO at Clair Services.

There's a whole pile of other associated costs to consider in the total cost of ownership calculation too.

"It is clear the long-term operational costs of LLMs will be more than just software capabilities.
For now, we are seeing indications that there is an uptick in managed service provider support for data management, tagging, cleansing and governance work, and we expect that trend to grow in the coming months and years. LLMs, and AI more broadly, put immense pressure on an organization to validate and organize data and make it available to support the models, but most large enterprises have underinvested in this work over the last decades," says Alex Bakker, distinguished analyst with global technology research and advisory firm ISG.

"Over time, as organizations improve their data architectures and modernize their data assets, the overhead of remediation work will likely decrease, but costs associated with the increased use of data -- higher network consumption, greater hardware requirements for supporting computations, etc. -- will increase. Overall, the advent of AI probably represents a step-change increase in the amount of money organizations spend on their data," Bakker adds.

Other standard business costs apply to models, too, and are adding strain to budgets. For example, backup models are a necessity and an additional cost.

"Risk management strategies must account for provider-specific characteristics. Organizations using OpenAI's premium models often maintain Anthropic or Google alternatives as backups, despite the price differential. This redundancy adds to overall costs but is essential for business continuity," says David Eller, group data product manager at Indicium.

There are other line items more specific to models that are bearing down on company budgets too.

"Even though there are API access fees to consider, the synthesis of the cost of operational overhead, fine-tuning, and compute resources can easily supersede it. The ownership cost should be considered thoroughly before implementation of AI technologies in the organization," says Cache Merrill, founder of Zibtek, a software development company.

Merrill notes the following as specific costs to look out for and budget for:

Installation costs: Running fine-tuned or proprietary LLMs may require NVIDIA A100 or H100 graphics processing units, which can cost $25,000+. In contrast, enterprise-grade cloud computing services cost between $5,000 and $15,000 for consistent usage on their own.

Model fine-tuning: The construction of a custom LLM can cost tens of thousands of dollars or more based on the various parameters of the dataset and constructional aspects.

Software maintenance: With regular updates of models, this software will also require security checks and compliance, as well as increasing cost at each scale, which is usually neglected at the initial stages of the project.

Human oversight: Employing experts in a particular field to review and advise on LLM results is becoming more common, which adds to the employee wage payout.

Some of the aforementioned costs are reduced by the use of SLMs, but some are not, or not significantly so. But given that many organizations use both large and small models, and/or an assortment of model types, it's fair to say that AI isn't cheap, and we haven't yet touched on energy and environmental costs. The best advice is to first establish solid use cases and choose models that precisely fit the tasks and offer a solid lead toward the ROI you're aiming for.

SLM, LLM, and Hybrid Examples

If you're unsure of -- or have yet to experiment with -- small language models, here are a few examples to give you a starting point.

Horenshtien says SLM examples on her list include Mistral 7B, LLaMa 3, Phi 3, and Gemma.
Top LLMs on her list are GPT-4, Claude 3.5, Falcon, Gemini, and Command R.

Examples of SLM vs LLM use cases in the real world that Horenshtien says her company sees include:

In manufacturing, SLMs can predict equipment failures, while LLMs provide real-time insights from IoT data.

In retail, SLMs personalize recommendations; LLMs power virtual shopping assistants.

In healthcare, SLMs classify records, while LLMs summarize medical research for clinicians.

Meanwhile, Eloshvili says that "some of the more solid and affordable versions [of SLMs and other LLM alternatives], in my opinion, would include Google Nano, Meta Llama 3 Small, Mistral 7B and Microsoft Phi-3 Mini."

But everyone understandably has their own list of SLMs based on varying criteria of importance to the beholder.

For example, Joseph Regensburger, vice president of research at Immuta, says some cost-efficient SLM options include GPT-4o-mini, Gemini-flash, AWS Titan Text Lite, and Titan Text Express.

"We use both LLMs and SLMs. The choice between these two models is use-case-specific. We have found SLMs are sufficiently effective for a number of traditional natural language processing tasks, such as sentence analysis. SLMs tend to handle the ambiguities inherent in language better than rule-based NLP approaches, at the same time offering a more cost-effective solution than LLMs. We have found that we need LLMs for tasks involving logical inference, text generation, or complex translation tasks," Regensburger explains.

Rogers Jeffrey Leo John urges companies to consider SLM open-source models too: "If you are looking for small LLMs for your task, here are some good open-source/open-weight models to start with: Mistral 7B, Microsoft Phi, Falcon 7B, Google Gemma, and LLama3 8B."

And if you're looking for some novel approaches to SLMs or a few other alternatives, Anatolii Kasianov, CTO of My Drama, a vertical video platform for unique and original short dramas and films, recommends: DistilBERT, TinyBERT, ALBERT, GPT-Neo (smaller versions), and FastText.

At the end of the day, the right LLM or SLM depends entirely on the needs of your projects or tasks. It's also prudent to remember that "Generative AI doesn't have to be the hammer for every nail," says Sean Falconer, AI entrepreneur in residence at Confluent.

About the Author

Pam Baker, Contributing Writer: A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which are "Decision Intelligence for Dummies" and "ChatGPT For Dummies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
-
Will Microsoft's Majorana 1 Chip Hasten the Quantum Arms Race?www.informationweek.comShane Snider, Senior Writer, InformationWeekFebruary 19, 20254 Min ReadPhoto provided by MicrosoftTech giant Microsoft on Wednesday revealed its Majorana 1, a palm-sized topoconductor chip that uses a new type of matter as a conductor and that Microsoft says will allow quantum systems to solve incredibly complex problems.The development could pave the way for a new era of computing -- along with a new era of high-powered cyberattacks.Quantum computing uses quantum physics to solve problems that are too complex for classical bit-based computers. Businesses and governments have poured billions of dollars into quantum research, with companies and nations jockeying for the promised competitive advantage.Microsoft says its chip provides a path for quantum systems to access up to a million qubits, which would be enough power to deliver real-world solutions that could revolutionize industry, logistics, healthcare, and much more. The topoconductor, or topological superconductor, can create a new state of matter that is not solid, liquid or gas, Microsoft said in a blog post.It's one thing to discover a new state of matter, Chetan Nayak, Microsoft technical fellow, said in a statement. It's another to take advantage of it to rethink quantum computing at scale.The particles used in the chip, called Majoranas, do not exist in nature and had to be created with magnetic fields and superconductors. The exotic particles were first envisioned by Microsoft nearly 20 years ago to develop topological qubits that offered more stability and required less error correction. Microsoft says the resulting qubits are more stable and can be digitally controlled.Most of us grew up learning there are three main types of matter that matter: solid, liquid, and gas. Today, that changed, Microsoft CEO Satya Nadella said in a post on X. Imagine a chip that can fit in the palm of your hand yet is capable of solving problems that even all the computers on Earth today combined could not!The breakthrough required new materials made of indium arsenide and aluminum, much of which the company designed and produced atom by atom.Whatever you're doing in the quantum space needs to have a path to a million qubits, Nayak said. If it doesn't, you're going to hit a wall before you get to the scale at which you can solve the really important problems that motivate us. We have actually worked out a path to a million.What Does One Million Qubits Buy?Microsoft says the advancement may mean that practical uses for quantum computing are reachable in years instead of decades. Those uses could pave the way for self-healing materials that can undo corrosion or cracks, allowing for safer airplane parts or more reliable bridge construction, for instance.Any company that makes anything could just design it perfectly the first time out, Matthias Troyer, a Microsoft technical fellow, said in a statement. The quantum computer teaches the AI the language of nature so the AI can just tell you the recipe for what you want to make.Quantum holds promise to unlock world-changing advancements in medicine, agriculture, machine learning, supply chain logistics, and much more. Aside from lofty, monumental advancements, businesses can look forward to more accessible and tangible benefits of quantum computing.Business philosopher, author and speaker Anders Indset says IT leaders should embrace the idea that quantum computing will soon be a big part of operations.
CIOs should be asking, What is my future business model, what are the things I can do if I could apply that additional computing power, he tells InformationWeek in a phone interview. You can envision and anticipate these types of business models... I think we are now seeing the birth of the quantum economy.Race to Get Quantum-ReadyBut quantums potential will also be attractive to threat actors.Microsofts announcement adds to the excitement created by Google when it released its own new quantum chip last year, saying commercial quantum computing applications were only five years away. IBM has teased large-scale quantum computer availability by 2033.Cybersecurity experts warn that quantum could usher in a whole new era of threats. Quantum-powered attacks could easily crack todays most sophisticated cryptography protecting critical systems. The National Institute of Standards and Technology has called for the federal government to migrate systems to post-quantum cryptography by 2035.An accelerated timeline could prove challenging to secure critical systems. A worldwide effort is already underway to deploy post-quantum cryptography (PQC) that will defend against quantums ability to quickly hack systems.If quantum computers can crack todays encryption, does that mean all our passwords, bank accounts, and national security data are at risk? Not necessarily, writes Fabrizio Micucci, a consultant with SHI International, in a LinkedIn post. Governments and organizations worldwide are developing quantum-resistant encryption methods. Microsofts breakthrough accelerates the timeline -- meaning that businesses, financial institutions, and cybersecurity firms must now prepare.Indset agrees securing quantum is now an immediate need. "The whole field of security will be lead to heavy investments in storing and securing for post quantum cryptography," he says. "This type of announcement will obviously increase the drive and the need to secure many, many critical infrastructures for quantum."Micucci believes the transition to quantum-safe encryption needs to happen within the next decade.The quantum revolution is no longer a distant dream -- its happening now, Micucci writes.Read more about:Quantum ComputingAbout the AuthorShane SniderSenior Writer, InformationWeekShane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.See more from Shane SniderNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Comments ·0 Shares ·54 Views
-
How to Make AI Projects Greener, Without the Greenwashingwww.informationweek.comSamuel Greengard, Contributing ReporterFebruary 19, 20255 Min ReadTithi Luadthong via Alamy StockFor businesses across every industry, artificial intelligence is rapidly reshuffling the deck. The technology opens the door to deeper insights, advanced automation, operational efficiencies, and cost savings.Yet, AI also delivers some baggage. The power-hungry nature of the technology, which impacts everything from data centers to training and using generative AI models, raises critical questions about sustainability. AI could double data center electricity demand in the US by 2030.As a result, business and IT leaders can easily find themselves caught in the crossfire between AIs benefits and risks. As organizations pursue carbon targets and other sustainability issues, a lack of clarity about the technology -- and perceptions of inconsistencies -- can evoke charges of greenwashing.Sustainable AI touches everything from the direct energy requirements that power artificial intelligence models to supply chain, reporting, hardware and data center operations. It can also raise questions about when and where organizations should use AI -- and when they shouldnt.Sustainable AI is about using AI in ways that minimize environmental impact while promoting sustainability throughout its lifecycle, says Sammy Lakshmanan, a partner at PwC Sustainability. The goal isnt to just reduce AIs footprint. Its to make AI both effective and sustainable.Related:Beyond the AI HypeA growing challenge for CIOs and other tech leaders is to fully understand the impact of AI, including GPUs that devour energy at about a 10x rate over other chips. While no company wants to miss the opportunities that AI can deliver, its also important to recognize that the technology comes with costs. Theres a temptation for organizations to get caught up in an AI arms race without looking at the returns, states Autumn Stanish, a director and analyst at Gartner.A haphazard or inconsistent approach to AI can contribute to the perception that a company is engaging in greenwashing. Many of the common uses of AI link directly to climate change, says David Rolnick, an assistant professor in the School of Computer Science at McGill University. Framing specific AI initiatives as net positives or negatives isnt the right approach, he argues. Its vital to gain a more holistic understanding of how AI impacts sustainability.Greenwashing problems often revolve around two key issues, Rolnick says. First, companies that use carbon offsets must recognize that they arent reducing emissions produced by AI systems. Second, sloppy reporting creates more questions than answers. While quantifying the carbon generated from AI is difficult -- especially Scope 3 emissions -- a lack of transparency increases the odds that a company will find itself in the crosshairs of activists and the media.Related:But theres also the fundamental question of how an organization uses AI, Rolnick says. Its important to put AI to work strategically. There are many places where it can improve efficiency -- particularly when it comes to automating processes and optimizing system -- but there also are many instances where it doesnt provide any significant advantages. This includes tossing generative AI at every problem. 
In many cases, humans make better decisions, he states.As companies pursue carbon reduction targets, it's important to identify where AI delivers specific strategic advantages -- and how it impacts sustainability in both positive and negative ways. Sustainable AI does not happen by accident -- it involves proper governance and engineering to create systems that are efficient and beneficial for productivity and innovation, Lakshmanan explains.Cracking the CodeTying an AI strategy to broader sustainability initiatives helps build an energy framework based on renewables -- including wind, solar and emerging sources of nuclear energy, such as small modular reactors (SMRs). While this approach doesn't directly lower the energy demand for AI, it can significantly curtail carbon output.The challenge lies in verifying that energy labeled as sustainable or carbon free is genuinely renewable, Lakshmanan points out. As a result, he recommends that organizations adopt transparency tools such as renewable energy certificates (RECs) and Power Purchase Agreements (PPAs) that help track the lifecycle impacts of renewable infrastructure.There are also practical steps organizations can take to help align AI with sustainability initiatives. This includes improvements in data center efficiency, such as better hardware and understanding when CPUs are a better option than GPUs. It also involves responsible data practices such as optimizing AI algorithms and models through pruning and sampling, and with transfer learning, which can significantly decrease computational demands by recycling pre-trained models.Transfer learning involves using a model trained for one task to improve results for a related task (a minimal code sketch appears at the end of this article).Training and inferencing models in a horizontal or cross-cutting manner can alleviate the need to repeat processes across departments and groups, Lakshmanan points out. For example, summarizing documents is a repeatable process whether it relates to sustainability or tax documents. There's no need to train the system twice for the same capability, he explains.The end goal, Lakshmanan says, is to adopt a holistic approach that spins a tight orbit around both innovation and the greater use of renewables. For instance, if an organization uses carbon offsets, he recommends pairing the program with a meaningful decarbonization strategy. This ensures offsets complement broader sustainability targets rather than replacing them. It makes AI projects both innovative and environmentally responsible.Beyond the AlgorithmAvoiding greenwashing accusations also requires sound carbon accounting practices that can measure and track AI emissions. A growing array of consulting firms and private companies offer tools to track AI emissions and optimize energy usage based on real-time grid conditions.Measurement, combined with deeper analysis of AI and data center energy consumption, can boost efficiency in other ways. There are ways to use AI to analyze and improve power consumption, including putting AI on the edge, says Gillian Crossen, Risk Advisory Principal and Global Technology Leader at Deloitte. Not everything has to go through the data center. AI can also right-size models and produce other insights and gains that offset its power requirements.Finally, it's important to avoid over-marketing claims or publishing data that presents an unrealistically positive picture to the public and investors, says Thomas P. Lyon, Dow Professor of Sustainable Science, Technology and Commerce at the University of Michigan's Ross School of Business.
An organization must be able to fully substantiate its claims about AI and sustainability, typically through metrics and third-party verification.With transparency across key segments, including customers, investors, partners and employees, the risks of greenwashing subside. Organizations should step back and think about how they can use AI effectively, Rolnick says. There are legitimate and productive use cases, but there's also a lot of energy waste associated with AI. Without a detailed assessment and a clear understanding of the various factors, the risks increase.About the AuthorSamuel GreengardContributing ReporterSamuel Greengard writes about business, technology, and cybersecurity for numerous magazines and websites. He is author of the books "The Internet of Things" and "Virtual Reality" (MIT Press).
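To make the transfer-learning technique Lakshmanan mentions concrete, here is a minimal sketch using the Hugging Face transformers library with PyTorch: a pre-trained encoder is frozen and only a small classification head is trained, which is where the computational savings come from. The model name, label count, and sample text are hypothetical placeholders, not a recommendation.

```python
# Minimal transfer-learning sketch: reuse a pre-trained encoder, train only a new head.
# Model name, label count, and data are illustrative placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # assumed small pre-trained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Freeze the pre-trained encoder so only the lightweight classifier head is updated.
for param in model.base_model.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=5e-4
)

# One illustrative training step on a single example.
batch = tokenizer(["example document to classify"], return_tensors="pt")
labels = torch.tensor([1])
loss = model(**batch, labels=labels).loss  # gradients flow only into the new head
loss.backward()
optimizer.step()
```

Because the expensive encoder weights are reused as-is, the same base model can serve several downstream tasks, which is the train-once, reuse-many-times behavior the article describes.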
-
Quick Study: The IT Hiring/Talent Challengewww.informationweek.comJames M. Connolly, Contributing Editor and WriterFebruary 19, 20257 Min ReadInk Drop via Alamy StockSo, you told a friend that you need to hire more IT folks. The friend replied, "Hah, good luck!"Circumstances dealt IT leaders a challenging hand over the past few years, from the great resignation to executive demands for digital transformation, and onward to corporate fascination with artificial intelligence, hiring and keeping IT talent requires new strategies.There was no single cause of today's hiring challenges, and there's no single, easy answer short of hitting the lottery and retiring. However, contributors to InformationWeek have shared their experiences and advice to IT leaders on ways to staff up and skill up, all while staying under budget and keeping IT operational lights on.In this guide to todays IT hiring and talent challenges, we have compiled a collection of advice and news articles focused on finding, hiring and retaining IT talent. We hope it helps you succeed this year.A World of ChangeHelp Wanted: IT Hiring Trends in 2025ITs role is becoming more strategic. Increasingly, it is expected to drive business value as organizations focus on digital transformation.IT Security Hiring Must Adapt to Skills ShortagesDiverse recruitment strategies, expanded training, and incentivized development programs can all help organizations narrow the skills gap in an era of rapidly evolving threat landscapes.Top IT Skills and Certifications in 2025In 2025 top IT certifications in cloud security and data will offer high salaries as businesses prioritize multi-cloud and AI.How To Be Competitive in a Tight IT Employment MarketA slumping economy, emerging technologies, and over-hiring has led to a tight IT jobs market. Yet positions are still abundant for individuals possessing the right skills and attitude.The Soft Side of IT: How Non-Technical Skills Shape Career SuccessHeres why soft skills matter in IT careers and how to effectively highlight them on a resume. Show that you are a good human.Salary Report: IT in Choppy Economic Seas and Roaring Winds of ChangeLast year brought a sustained adrenaline rush for IT. Everything changed. Some of it with a whimper and some of it with a bang. Through it all IT pros held steady, but is it enough to sail safely through the end of 2024?Quick Study: The Future of Work Is HereThe workplace of the future isn't off in the future. It's been here for a few years -- even pre-pandemic.10 Unexpected, Under the Radar Predictions for 2025From looming energy shortages and forced AI confessions to the rising ranks of AI-faked employees and a glimmer of a new cyber-iron curtain, heres whats happening that may require you to change your companys course.Finding TalentAI Speeds IT Team HiringCan AI help your organization find top IT job candidates quickly and easily? A growing number of hiring experts are convinced it can.Skills-Based Hiring in IT: How to Do it RightBy focusing directly on skills instead of more subjective criteria, IT leaders can build highly capable teams. 
Here's what you need to know to get started.The Evolution of IT Job Interviews: Preparing for Skills-Based HiringThe traditional tech job interview process is undergoing a significant shift as companies increasingly focus on skills-based hiring and move away from the traditional emphasis of academic degrees.IT Careers: Does Skills-Based Hiring Really Work?More organizations are moving toward skills-based hiring and getting mixed results. Heres how to avoid some of the pitfalls.Jumping the IT Talent Gap: Cyber, Cloud, and Software DevsBusinesses must first determine where their IT skill sets need bolstering and then develop an upskilling strategy or focus on strategic new hires.Top Career Paths for New IT CandidatesMore organizations are moving from roles-based staffing to skills-based staffing. In IT, flexibility is key.Why IT Leaders Should Hire Veterans for Cybersecurity RolesMaintaining cybersecurity requires the effort of a team. Veterans are uniquely skilled to operate in this role and bring strengths that meet key industry needs.How to Find a Qualified IT Intern Among CandidatesIT organizations offering intern programs often find themselves swamped with applicants. Here's how to find the most knowledgeable and prepared candidates.The Search for Solid Hires Between AI Screening and GenAI ResumesDo AI-generated job applications gum up the recruitment process for hiring managers by filling inboxes with dubiously written CVs?3 Things You Should Look for When Hiring New GraduatesEach year, entry-level applicants in IT look a little different. Heres what you need to be looking for as the class of 2023 infiltrates the workforce.Why a College Degree is No Longer Necessary for IT SuccessWho needs student debt? A growing number of employers are hiring IT pros with little or no college experience.Recruiting TalentIn Global Contest for Tech Talent, US Skills Draw Top PayAfter several years of economic uncertainty and layoffs, US talent is once again attracting good pay in the global competition for tech skills. But gender disparity continues in many job categories.Hiring Hi-Tech Talent by Kickin It Old SchoolUsing elements of a traditional approach to recruiting IT professionals can attract and grow the modern workforce, but it's the soft skills shown during an interview that make a big difference.The Impact of AISkills on Hiring and Career AdvancementDemand is high for professionals with knowledge of AI, but do such talents really get implemented on the job?How to Channel a Worlds Fair Culture to Engage IT TalentEven the most well-funded and innovative companies will fail if they lack one thing: A diverse, united team. A CEO shares his experience and advice.Bridging IT Skills Gap in the Age of Digital TransformationInnovations in automation, cloud computing, big data analytics, and AI have not only changed the way businesses operate but have intensified the demand for specialized skills.5 Traits To Look for When Hiring Business and IT InnovatorsHiring resilient and forward-thinking employees is the cornerstone to innovation. 
If youre looking to hire a trailblazer, here are five traits to seek, as well as questions to ask.CIOs Can Build a Resilient IT Workforce with AI and Unconventional TalentAs the IT talent crunch continues, chief information officers can embrace new strategies to combine traditional IT staff with nontraditional workers and AI to augment the workforce.Pursuing Nontraditional IT Candidates: Methods to Expand Talent PipelinesEmployers winning in this labor market know how to look at adjacent skills and invest in upskilling their internal candidates while creating alternative candidate pools.Hiring with AI: Get It Right from the StartAs organizations increasingly adopt artificial intelligence in hiring, its essential that they understand how to use the technology to reduce bias rather than exacerbate it.Secrets to Hiring Top Tech TalentTo hire best-in-class IT talent, your company must have interesting technical problems to solve.Keeping TalentMeaningful Ways to Reward Your IT Team and Its AchievementsA job well done deserves a significant reward. Here's how to show appreciation to a diligent staff without busting your budget.Recognize the Contributions of Average IT PerformersEvery IT departmenthas its marginal performers. How do you get the most out of them?How to Manage a Rapidly Growing IT TeamMaintaining IT staff performance and efficiency during rapid growth requires careful planning and structure. Here's how to expand your team without missing a beat.Do Women IT Leaders Face a Glass Cliff?Are organizations more likely to promote women to top IT management posts during hopeless crisis situations? Apparently, yes.Skills Gap in Cloud Tools: Why It Exists and Ways to AddressAs enterprises shift to modernize applications, a companys most important asset is talent performance to back it up.Addressing the Skills Gap to Keep Up with the Evolution of the CloudAs cloud adoption increases, companies must focus on upskilling employees through continuous learning to maximize cloud and AI potential.The AI Skills Gap and How to Address ItWorkers are struggling to integrate AI into their skill sets. Where are we falling short in helping them leverage AI to their own benefit and the benefit of their employers?About the AuthorJames M. ConnollyContributing Editor and WriterJim Connolly is a versatile and experienced freelance technology journalist who has reported on IT trends for more than three decades. He was previouslyeditorial director of InformationWeek and Network Computing, where heoversaw the day-to-day planning and editing on the sites. He has written about enterprise computing, data analytics, the PC revolution, the evolution of the Internet, networking, IT management, and the ongoing shift to cloud-based services and mobility. He has covered breaking industry news and has led teams focused on product reviews and technology trends. He has concentrated on serving the information needs of IT decision-makers in large organizations and has worked with those managers to help them learn from their peers and share their experiences in implementing leading-edge technologies through such publications as Computerworld. Jim also has helped to launch a technology-focused startup, as one of the founding editors at TechTarget, and has served as editor of an established news organization focused on technology startups at MassHighTech.See more from James M. 
Connolly
-
AI Upskilling: How to Train Your Employees to Be Better Prompt Engineerswww.informationweek.comLisa Morgan, Freelance WriterFebruary 19, 202510 Min ReadTithi Luadthong via Alamy StockGenerative AIs use has exploded across industries, helping people to write, code, brainstorm and more. While the interface couldnt be simpler -- just type some text in the box -- mastery of it involves continued use and constant iteration.GenAI is considered a game-changer, which is why enterprises want to scale it. While users have various resources available, like OpenAI and Gemini, proprietary LLMs and GenAI embedded in applications, companies want to ensure that employees are not compromising sensitive data.GenAIs unprecedented rate of adoption has inspired many individuals to seek training on their own, often online at sites such as Coursera, EdX, and Udemy, but employers shouldnt depend on that. Given the strategic nature of the technology, companies should invest in training for their employees.A Fast Track To Improving Prompt Engineering EfficacyAndreas Welsch, founder and chief AI strategist at boutique AI strategy consultancy Intelligence Briefing, advocates starting with a Community of Multipliers -- early tech adopters who are eager to learn about the latest technology and how to make it useful. These multipliers can teach others in their departments, helping leadership scale the training. Next, he suggests piloting training formats in one business area, gathering feedback and iterating on the concept and delivery. Then, roll it out to the entire organization to maximize utility and impact.Related:Despite ChatGPT being available for two years, Generative AI tools are still a new type of application for most business users, says Welsch. Prompt engineering training should inspire learners to think and dream big.He also believes different kinds of learning environments benefit different types of users. For example, cohort-based online sessions have proven successful for introductory levels of AI literacy while executive training expands the scope from basic prompting to GenAI products.Advanced training is best conducted in a workshop because the content requires more context and interaction, and the value comes from networking with others and having access to an expert trainer. Advanced training goes deeper into the fundamentals including LLMs, retrieval-augmented generation, vector databases and security risks, for example.Andreas Welsch, Intelligence BriefingFunction-specific, tailored workshops and trainings can provide additional level of relevance to learners when the content and examples are put into the audience's context, for example, using GenAI in marketing, says Welsch. Prompting is an important skill to learn at this early stage of GenAI maturity.Related:Digital agency Create & Grow, initiated its prompt engineering training with a focus on the basics of generative AI and its applications. Recognizing the diverse skill levels within its team, the company implemented stratified training sessions, beginning with foundational concepts for novices and advancing to complex techniques for experienced members.This approach ensures that each team member receives the appropriate level of training, maximizing learning efficiency and application effectiveness, says Georgi Todorov, founder and CEO of Create & Grow, in an email interview. Our AI specialists, in collaboration with the HR department, lead the training initiatives. 
This dual leadership ensures that the technical depth of AI is well-integrated with our overarching employee training programs, aligning with broader company goals and individual development plans.The companys training covers:The basics of AI and language modelsPrinciples of prompt design and response analysisUse cases specific to its industry and client requirementsEthical considerations and best practices in AI usageEducational resources including online courses, in-person workshops, and peer-led sessions, and use of resources from leading AI platforms and collaborations with AI experts that keeps training up-to-date and relevantRelated:To gauge individuals level of prompt engineering mastery, Create & Grow conducts regular assessments and chooses practical projects that reflect real-world scenarios. These assessments help the company tailor ongoing training and provide targeted support where needed.Its crucial to foster a culture of continuous learning and curiosity. Encouraging team members to experiment with AI tools and share their findings helps demystify the technology and integrate it more deeply into everyday workflows, says Todorov. Our commitment to developing prompt engineering expertise is not just about staying competitive; its about harnessing the full potential of AI to innovate and improve our client offerings.A Different TakeKelwin Fernandes, cofounder and CEO at AI strategy consulting firm NILG.AI says good prompts are not ambiguous.A quick way to improve prompts is to ask the AI model if there's any ambiguity in the prompt. Then, adjust it accordingly, says Fernandes in an email interview.His company defined a basic six-part template for efficient prompting that covers:The role the AI should play (e.g., summarizing, drafting, etc.)The human role or position the AI should imitateA description of the task, being specific and removing any ambiguityA negative prompt stating what the AI cannot do. (E.g., dont answer if youre unsure)Any context you have that the AI doesnt know (E.g., information about the company)The specific task details the AI should solve at this time.[W]e do sharing sessions and role plays where team members bring their prompts, with examples that worked and examples that didn't and we brainstorm how to improve them, says Fernandes.At video production company Bonfire Labs, prompt training includes a communal think tank on Google Chat, making knowledge accessible to all. The company also holds staff meetings in which different departments learn foundational skills, such as prompt structure or tool identification.This ensures we are constantly cross-skilling and upskilling our people to stay ahead of the game. Our head of emerging technologies also plays an integral role in training and any creative process that requires AI, further improving our best practices, says Jim Bartel, partner, managing director at Bonfire Labs in an email interview. We have found that the best people to spearhead prompt training are those who are already masters at what they do, such as our designers and VFX artists. Their expertise in refinement and attention to detail is perfect for prompting.Why Developers May Have an EdgeEdward Tian, CEO at GPTZero believes prompt engineering begins with gaining an understanding of the various language models, including ChatGPT, GPT-2, GPT-3, GPT-4, and LLaMA.Its also important to have a background in coding and an understanding of NLP, but people often have minimal knowledge about the different language models, says Tian. 
Understanding how their learning concepts work and how they are structured can help significantly with prompt engineering. Working with pre-trained models can also help prompt engineers really hone their skills and [gain] a further understanding of how it all works.Chris Beavis, partner and AI Specialist at design-led consultancy The Frameworks suggests using the OpenAI development portal versus ChatGPT or Gemini, for example.It offers a greater level of control and access to different models. The temperature of a model is particularly important, allowing you to flex the randomness [and] creativity of answers over determinism [or] repeatability, says Beavis in an email interview.Chris Beavis, The Frameworks[The user] should start by identifying an idea or a challenge they are facing to see what impact AI can have. Try out different approaches, remember to give specific instructions, provide examples, and be clear about the format of the result you are expecting. Some other tips include breaking problems down into steps, including relevant data sets for context and prompting the AI to ask you questions about your request if its not clear.Most employees are experimenting with AI at The Frameworks in different ways, from image generation and summarization to more advanced techniques like augmented information retrieval and model training.I certainly think there is an initial barrier to overcome [when] familiarizing yourself with how to prompt, which may suggest the need for a beginner level of training. Beyond that, I think its a learning journey that will depend on your area of interest. A developer may want to explore how to connect AI prompting to data sets via APIs, copywriters may want to use it for brainstorming or drafting and strategists may want to use it to interrogate complex data sets. Its a digital literacy question.His company is finding the most useful applications are where they use code to combine prompts with data sets, like mail merging. That way, AI can be treated as a step in a repeatable problem-solving process.As with most companies, we started by simply seeing what the technology could do, says Beavis. As we become more familiar with the capabilities, we are finding interesting uses within client projects and our own internal processes.Intelligence Briefings Welsch says for software developers, mastery is a cost function such as getting the optimal output with the shortest possible prompt (to consume the least amount of tokens). For business users, he says proficiency could be measured by awareness of common prompting techniques and frameworks.Prompting is often portrayed as a glorified science. While teaching techniques is a good start for laying a foundation, Generative AI requires users to think differently and use software differently, says Welsch. [Trainees] can learn about examples of what these tools can be used for, but it is experimenting and iterating over an open-ended conversation that they should take away from it.Engage Specialized TrainersBrendan Gutierrez McDonnell, a partner at K&L Gates in the law firm's AI solutions group, says his company uses a multifaced approach to prompt engineering training.We have relied on experiential training provider AltaClaros prompt engineering course as an introduction for our lawyers and allied professionals to the world of prompt engineering. 
We have supplemented that foundational training with prompt engineering courses tailored to the GenAI [and other] AI solutions that our firm has licensed, says McDonnell in an email interview. These more tailored programs have been conducted in tandem by the vendor providing the solution and by our internal community of power users familiar with the specific solution.At present, the firm is building its own internal database of prompt engineering questions that work well with the various GenAI solutions. Over time, he expects the solutions themselves will recommend the best prompt engineering guidance to solve a particular problem.The best way to develop a degree of mastery is through education from outside educational vendors like AltaClaro, solution vendors like Thomson Reuters, and by learning from your colleagues, says McDonnell. Prompt engineering is best approached as a team sport. Most importantly, you must dive in and use the program. Be creative and push your own limits and the programs limits.Brendan Gutierrez McDonnell, K&L GatesK&L Gates has training programs for beginners that cover the basics and nuanced programs for advanced users, but before jumping into prompt engineering, he believes the user should have a fundamental understanding of how a GenAI solution works and whether the information input into the program will remain confidential or not.The user [should] understand that the output needs to be verified as large language models can make mistakes. Finally, the user needs to know how to vet the output. Once the user has these basics in order, she or he can start to learn how to prompt, says McDonnell. The user should be given problems to solve so that the user can put his or her prompting to the test and then review the results with peers. Having a training partner like AltaClaro can make sure that the training experience is effective, as they are experts in building programs tailored to the way lawyers learn best.Bottom LineOrganizations are approaching GenAI training differently, but they tend to agree its necessary to jumpstart better prompting.Where to get that training varies, and the sources are not mutually exclusive. One can hire expert help on-site, create their own programs and invest in GenAI online courses depending on the level of existing knowledge and the need to provide training that advances GenAI proficiency at varying levels of mastery.Read more about:Cost of AIAbout the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emergingtechnology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.See more from Lisa MorganNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Comments ·0 Shares ·64 Views
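As a rough illustration of the six-part prompt template Fernandes outlines above, the sketch below assembles a prompt from those parts and sends it through the OpenAI Python client, with a low temperature setting of the kind Beavis describes for trading creativity against repeatability. The model name and all strings are hypothetical placeholders; any chat-completion API could be substituted.

```python
# Hypothetical sketch of a six-part prompt template; all strings are placeholders.
from openai import OpenAI

parts = {
    "ai_role": "You are a summarization assistant.",
    "human_role": "Respond as an experienced financial analyst would.",
    "task": "Summarize the quarterly report below in five bullet points.",
    "negative": "If any figure is ambiguous or missing, say so instead of guessing.",
    "context": "The company is a mid-sized retailer reporting in USD.",
    "details": "Report text: <paste report here>",
}
prompt = "\n".join(parts.values())

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    temperature=0.2,       # lower temperature favors repeatable, less creative output
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Keeping the parts in a named structure also makes prompts easier to review in the kind of sharing sessions Fernandes and Bartel describe, since each element can be critiqued and adjusted on its own.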
-
Key Ways to Measure AI Project ROIwww.informationweek.comJohn Edwards, Technology Journalist & AuthorFebruary 18, 20257 Min ReadTithi Luadthong via Alamy StockBusinesses of all types and sizes are launching AI projects, fearing that failing to embrace the powerful new technology will place them at a competitive disadvantage. Yet in their haste to jump on the AI bandwagon, many enterprises fail to consider one critical point: Will the project meet its expected efficiency or profitability goal?Enterprises should consider several criteria to assess the ROI of individual AI projects, including alignment with strategic business goals, potential cost savings, revenue generation, and improvements in operational efficiencies, says Munir Hafez, senior vice president and CIO with credit monitoring firm TransUnion, in an email interview.Besides relying on the standard criteria used for typical software projects -- such as scalability, technology sustainability, and talent -- AI projects must also account for the costs associated with maintaining accuracy and handling model drift over time, says Narendra Narukulla, vice president, Quant analytics, at JPMorganChase.In an online interview, Narukulla points to the example of a retailer deploying a forecasting model designed to predict sales for a specific clothing brand. "After three months, the retailer notices that sales haven't increased and has launched a new sub-brand targeting Gen Z customers instead of millennials," he says. To improve the AI model's performance, an extra variable could be added to account for the new generation of customers purchasing at the store.Related:Effective ApproachesAssessing an AI project's ROI should start by ensuring that the initiative aligns with core business objectives. "Whether the goal is operational efficiency, enhanced customer engagement, or new revenue streams, the project must clearly tie into the organizations strategic priorities," says Beena Ammanath, head of technology trust and ethics at business advisory firm Deloitte, in an online interview.David Lindenbaum, head of Accenture Federal Services' GenAI center of excellence, recommends starting with a business assessment to identify and understand the AI project's end-user as well as the initiative's desired effect. "This will help refocus from a pure technical implementation into business impact," he says via email. Lindenbaum also advises continued AI project evaluation, focusing on a custom test case that will allow developers to accurately measure success and quantitively understand how well the system is operating at any given time.Ammanath believes that a comprehensive cost-benefit analysis is also essential, balancing tangible outcomes such as increased productivity with intangible ones, like improved customer satisfaction or brand perception. "Scalability and sustainability should be central considerations to ensure that AI initiatives deliver long-term value and can grow with organizational needs," she says. "Additionally, a robust risk management framework is vital to address challenges related to data quality, privacy, and ethical concerns, ensuring that projects are both resilient and adaptable."Related:Metrics MatterPotential project ROI can be measured with metrics, including projected cost savings, expected revenue increases, hours of productivity saved, and anticipated improvements in key performance indicators (KPIs) such as customer satisfaction scores, Hafez says. 
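As a hypothetical illustration of how the metrics Hafez lists might roll up into a single figure, the short Python sketch below nets estimated benefits against costs. Every number is an invented placeholder, and a real assessment would also weigh the intangible and risk factors discussed here.

```python
# Hypothetical AI project ROI roll-up; every figure is an invented placeholder.
projected_cost_savings = 250_000      # e.g., reduced manual processing
expected_revenue_increase = 400_000   # revenue tied directly to the AI initiative
hours_saved = 6_000                   # productivity hours saved per year
loaded_hourly_rate = 75               # assumed value of one productive hour
productivity_value = hours_saved * loaded_hourly_rate

annual_benefit = projected_cost_savings + expected_revenue_increase + productivity_value
annual_cost = 550_000                 # licenses, compute, fine-tuning, oversight

roi = (annual_benefit - annual_cost) / annual_cost
print(f"Estimated first-year ROI: {roi:.0%}")  # prints: Estimated first-year ROI: 100%
```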
Additionally, metrics such as time-to-market for new products or services, as well as any expected reduction in bugs or vulnerabilities revealed by a tool such as Amazon Q Developer, can provide insights into an AI project's potential benefits.Leaders need to look past the technology to determine how investing in generative AI aligns with their overall strategy, Ammanath says. She notes that the metrics required to measure AI project ROI vary, depending on the implementation stage. For example, to measure the potential ROI, organizations should evaluate projected efficiency gains, estimated revenue growth, and strategic benefits, like improved customer loyalty or reduced downtime. "These forward-looking metrics offer insights into the initiatives promise and help leaders determine if they align with the business goals." Additionally, for current ROI, leaders should consider using metrics that look at realized outcomes, such as actual cost savings, revenue increases tied directly to AI initiatives, and improvements in key performance indicators like customer satisfaction or throughput.Related:Pulling the PlugIf an AI project consistently fails to meet expectations, terminate it in a calculated manner, Hafez recommends. "Document the lessons learned and the reasons for failure, reallocate resources to more promising initiatives, and leverage the knowledge gained to improve future projects."Once a decision has been made to end a project, yet prior to officially announcing the ventures termination, Narukulla advises identifying alternative projects or roles for the now-idled AI team talent. "In light of the ongoing shortage of skilled professionals, ensuring a smooth transition for the team to new initiatives should be a priority," he says.Narukulla adds that capturing key learnings from the terminated project should be a priority. "A thorough post-mortem analysis should be conducted to assess which strategies were successful, which aspects fell short, and what improvements can be made for future endeavors."Narukulla believes that thoroughly documenting post-mortem insights can be invaluable for future reference. "By the time a similar issue arises, new models and additional data sources may offer innovative solutions," he explains. At that point, the project may be revived in a new and useful form.Parting ThoughtsEstablishing a strong governance framework for all ongoing AI projects is essential, Hafez says. "Further, a strong partnership with legal, compliance, and privacy teams can enhance success, particularly in regulated industries." He also suggests collaborating with external partners. "Leveraging their expertise can provide valuable insights and accelerate the AI journey."When implemented and scaled properly, AI is far more than a technological tool; it's a strategic enabler of innovation and competitive advantage, Ammanath says. However, long-term success requires more than sophisticated algorithms -- it demands cultural transformation, emphasizing human collaboration, agility, and ethical foresight, she warns. "Organizations that thrive with AI establish clear governance frameworks, align business and technical teams, and prioritize long-term value creation over short-term gains."As AI continues to advance and evolve, IT leaders have an unprecedented opportunity to align investments with enterprise-wide goals, Ammanath says. 
"By approaching AI as a strategic lever rather than a standalone solution, organizations can position themselves at the forefront of innovation and value creation."Read more about:Cost of AIAbout the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Comments ·0 Shares ·39 Views
-
The Cost of AI: How Can We Adopt and Deliver AI Efficiently?www.informationweek.comWhat mindset do enterprises need to adopt in order to make AI happen without breaking the bank and still see tangible results?Joao-Pierre S. Ruth, Senior EditorFebruary 18, 2025Which should come first, the plan to adopt AI or an assessment of the available resources an enterprise has to support AI? Is it better to develop AI in-house or turn to third parties? What third-party resources should enterprises look to in order to deliver on their AI plans?In the final week of "The Cost of AI" series, the focus shifts to practical ideas to advance plans for AI.Organizations might feel compelled to acquire top-tier AI resources or search for only the most elite AI professionals to enact their strategies for AI, but that might not make efficient use of an enterprise's actual resources. It might not even be realistic.How should companies structure their AI strategies in order to deliver positive ROI? How should short- and long-term plans be mapped out?What can companies do to stay on budget when pursuing AI? How can they determine a rational budget for the scope of their plans? What happens if they realize they cannot achieve their goals within that budget?In this episode of DOS Won't Hunt, Fred Sala, chief scientist at Snorkel AI; Becky Carroll, partner, IBM Consulting Global - AWS strategic partnership lead for data and AI; Charles Xie, CEO and founder of Zilliz; Srujan Akula, CEO of The Modern Data Company; and Deepak Singh, vice president of developer experience at AWS, discussed these and other questions to bring some clarity and efficiency to AI strategies.Read more about:Cost of AIAbout the AuthorJoao-Pierre S. RuthSenior EditorJoao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.
-
How to Create a Sound Data Governance Strategywww.informationweek.comLisa Morgan, Freelance WriterFebruary 18, 20257 Min ReadZoonar GmbH via Alamy StockData governance is still a challenge at some organizations. Operational silos among departments, a lack of strong data governance leadership, and the ever-increasing glut of data are factors, as is a failure to understand the universe of data.Some elements of sound data governance involve understanding the various data types that you may have. Once you understand the types, you can start looking at the sensitivity level of the data, classify that information, and determine how best to protect and manage it, says Erich Barlow, head of information security - Americas | global IT security professional at standards and certification organization BSI Americas. Having a clear vision of the business context for collecting and storing data is also an element of a sound governance strategy. Developing this vision allows information to be governed in a manner consistent with current established standards and regulatory requirements, while also providing an outline of a potential data governance strategy.A sound data governance strategy needs to address the ethical use of data -- to avoid issues such as illegal discrimination -- or face consequences, such as fines, potential litigation, and reputational harm."Establishing data ownership and stewardship is also a crucial element of a sound data governance strategy, says Barlow. This process helps improve data control and ensures that only reliable sources are used and managed.Related:Arunkumar Thirunagalingam, senior manager data and technical operations at healthcare company McKesson, says to empower data sovereignty, organizations should formulate policies that include data definitions, access rights and data protection, compliance processes, data quality standards and performance metrics to ensure adherence to standards. Organizations should also be able to adapt to new regulations and business needs.Common Challenges Organizations FaceAdam Ennamli, chief risk andsecurity officer at General Bank of Canada, says approaching data governance as an IT or compliance initiative is the most fundamental mistake organizations make.[A] sound data governance strategy must be driven by the business, for the business, focusing on how data acts as an input to decisions, says Ennamli. If governance is pushed down as a technology, security or regulatory project, the business users may see it as a burden rather than a value driver, says Ennamli.Another mistake is trying to do too much at once.Too many organizations try to govern all their data at once, creating elaborate frameworks-on-slides that look impressive on paper but wont last a week in execution, says Ennamli. Instead, pick a critical business product or process, which can then lend you their influence across their organization, establish governance there, show tangible benefits and then expand.Related:Governance is also an ongoing process, not an event.[G]overnance isnt a project with an end date. Its an ongoing hygiene exercise that requires continuous attention and focus, says Ennamli. You dont have to build an army if you did the initial work right, just a diverse team of experts that understand the business dynamics and have foundational data knowledge.McKessons Thirunagalingam warns that its also possible to imagine starting from the wrong end, having ignored the needs of certain key stakeholders until late in the game. 
The result of that is resistance to the adoption of solution and misaligned policies for the governance of the business with its operational requirements.Do a bit and then build up. Make things simple at first [to] quickly deliver business value, such as increasing data accuracy or [enabling] more effective compliance, says Thirunagalingam. Promote accountability by embedding governance into business outcomes and encouraging ownership of data stewardship to all employees.Related:BSI Americass Barlow says some organizations dont understand how much data they possess, which can hamper the implementation of an effective data management program. Similarly, they may not fully grasp what regulations they must comply with or what data is specifically collected.This is especially true if the data collected is metadata from websites or applications. Some of this information is under regulatory control, so the business may need to apply additional control measures to comply with these requirements, says Barlow. Another challenge is finding the proper framework to fit the needs of the business and that of its clients and customers. Many standards exist, but some standards are suggestive and provide guidelines, while others are prescriptive, state-specific requirements that need to be adopted. This, in turn, means that the control measures required by a specific standard may be costly for the business to implement.When organizations arent aware of the data they possess and what controls may be required to comply with a given standard. The misunderstanding then snowballs into an ineffective program that does not meet the business's needs. It also puts the data at risk since the control measures are ineffective."Organizations in highly regulated industries such as healthcare and finance, dont have a good handle on the data they collect, says Barlow. Typically, the organization will collect an overabundance of data that is not needed for their services, [and] because they collect it, they must manage it. Some of these businesses are unaware that this information can be sensitive and require specialized care such as [at]-rest or [in]-transit encryption, so they spend more than budgeted.Who Should Spearhead Data GovernanceMany different types of roles are assigned to head governance because organizations approach it differently. It could be the head of compliance or privacy, the CISO, an existing risk function, the CIO or CTO, or another role. BSI Americas Barlow believes CISOs are the best choice.Information security officers are well placed in many organizations to address specific issues that may arise in handling or storing data, says Barlow. Additionally, InfoSec teams can help organizations understand the various requirements pertaining to their businesss data. The Information security team will have hands-on knowledge of how to implement the various security measures required by specific data management standards.If organizations have a data security officer or a data protection officer, they too should be involved in developing the methodology and management of data because they understand the complexity of the data and how to adhere to various international standards and local regulations. 
He also recommends having the legal team involved since litigation is the reason why some companies developed data management standards when they did.General Bank of Canada's Ennamli says while voluntary or designated data stewards are a decent idea on paper, it rarely works out due to competing priorities and loyalties.You want dedicated, focused people that will look at the data, the processes, the operations, and build critical bridges between technology assets, informational assets, and business value units, translating requirements and emerging a clear, pragmatic mapping in both directions, says Ennamli.McKesson's Thirunagalingam says strong data governance leadership comes solely from chief data officers and similar high-level executive sponsors who are expected to ensure cross-departmental collaboration.The person guarantees that data governance strategy is implemented towards the business goals and that top management endorses the strategy, says Thirunagalingam. Collaboration is of the utmost importance -- businesspeople, IT teams, data stewards, lawyers, etc., all are essential. There is a governance committee for which members are recommended on a cross-functional basis to ensure policies are holistic in terms of addressing and meeting technical, legal, operational and other organization objectives.Tips for SuccessGiven the ever-increasing reliance on data for analytics, AI, and to inform business strategy, organizations that have not yet defined and implemented a data governance strategy should do so now.Taking control of your data will be crucial for when businesses begin developing or utilizing new and emerging data-driven technologies like AI and quantum computing, says BSI's Barlow. Addressing security issues early on will also help to ensure the information is available for use by emerging technologies in the future. Taking control of your data and addressing security issues will benefit both your business and customers, so the information must be accurate and readily available to be included in various models and training algorithms.General Bank of Canada's Ennamli underscores the need for simplicity.The most successful governance tip is to focus on making governance digestible, meaning practical, jargon-free and useful for end users, says Ennamli. The minute governance becomes an obstacle to getting value creation work done, people will inevitably find ways around it, so be pragmatic and realistic in your approach.And don't forget the importance of cross-functional collaboration. Without strong data governance leadership and the right people involved, organizations risk inadvertent use or outright exploitation of data in a manner that's harmful to the organization and its stakeholders.About the AuthorLisa MorganFreelance WriterLisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
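As a small, hypothetical illustration of the classification step Barlow describes earlier in this article -- understanding data types, assigning sensitivity levels, and deriving handling requirements -- the Python sketch below tags fields against an assumed three-tier scheme. The tiers, field names, and control mappings are placeholders; real policies would come from your own regulatory and business context.

```python
# Hypothetical data-classification sketch: map fields to assumed sensitivity tiers
# and derive the handling controls for each. Tiers, fields, and controls are placeholders.
SENSITIVITY_TIERS = {
    "public":     {"encrypt_at_rest": False, "encrypt_in_transit": False, "retention_days": 3650},
    "internal":   {"encrypt_at_rest": True,  "encrypt_in_transit": True,  "retention_days": 1825},
    "restricted": {"encrypt_at_rest": True,  "encrypt_in_transit": True,  "retention_days": 365},
}

FIELD_CLASSIFICATION = {
    "press_release": "public",
    "order_history": "internal",
    "patient_record": "restricted",  # regulated data gets the strictest handling
}

def controls_for(field_name: str) -> dict:
    """Return the assumed handling controls required for a given field."""
    tier = FIELD_CLASSIFICATION.get(field_name, "restricted")  # default to strictest tier
    return {"field": field_name, "tier": tier, **SENSITIVITY_TIERS[tier]}

for field in FIELD_CLASSIFICATION:
    print(controls_for(field))
```

Even a toy inventory like this makes Barlow's point visible: you cannot decide what controls a field needs, or what a standard will cost to implement, until you know the field exists and how sensitive it is.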
-
SolarWinds CEO on $4.4B Acquisition, Calming Uncertainty, and Securing the Futurewww.informationweek.comShane Snider, Senior Writer, InformationWeekFebruary 13, 20253 Min ReadSOPA Images Limited via Alamy StockObservability and IT management software company SolarWinds shocked the IT community with last weeks announcement that it would go private after a $4.4 billion acquisition by private equity firm Turn/River Capital.News of 2025s biggest-so-far technology deal was met with some skepticism by analysts and cybersecurity experts, who worry about the Turn/Rivers long-term plans and security implications. SolarWinds was the victim of a historic nation-state cyberattack that rocked the industry in 2021. Security experts worry what a transition to a privately held organization will mean for transparency going forward.Will Townsend, vice president and principal analyst at Moor Insights & Strategy took to X to suggest that the companys massive hack led to the sale. Going private through a PE is no surprise, he wrote. [SolarWinds] never did enough to reassure investors and customers that it had learned and implemented measures to prevent that epic supply chain hack from happening again.But in a live interview with InformationWeek, SolarWinds CEO Sudhakar Ramakrishna says its the companys success even after the attack that drove Turn/Rivers acquisition play that led to the blockbuster deal. SolarWinds most recent financial report shows $200 million in revenue for the third quarter of 2024, a 6% year-over-year increase.Related:Probably the most significant reason why Turn/River was attracted to us is the fact that weve continued making progress on the SolarWinds platform and continue to make progress on every metric from a business standpoint when outside investors look at us, theyre obviously looking at the business trajectory, which is unquestionable at this point, he says.SolarWinds customers and partners should look forward to continued growth, Ramakrishna says.Customers should expect us to ramp faster innovations on our SolarWinds platform with our focus on time to value, time to remediate, and time to resolve, we are making good progress organically on all three of those dimensions and well be accelerating that progress.He says the company will also be making improvements to packaging and pricing. Customers should experience and expect everything from us that they have come to know and like about us. Hopefully, they should get more from us in terms of how we give them solutions that accelerate their business transformation.Questions of SecurityHow SolarWinds handles security going forward with the transition to a private entity will be watched closely by the cybersecurity industry. Brian Fox, co-founder and CTO of software supply chain management firm Sonatype, says the SolarWinds attack exposed the level of attacks on critical supply chains.Related:The SolarWinds hack perfectly showcased the rise of sophisticated software supply chain attacks, as it compromised high-profile networks, including those of nine US government agencies, Fox says in an email interview. As SolarWinds charts a new path forward, I can only hope that lessons learned would not be forgotten amid the organizational change.SolarWinds Ramakrishna says the company wants to allay those security concerns. I think its a well-placed fear But as I engage with the Turn/River team, one of my important emphases was on secure-by-design and the initiatives that I started back in 2021, he says. 
There's a need for us to continue to help ensure transparency with customers, which then obviously leads to trust. I don't expect that to change. The acquisition will cost Turn/River $18.50 per share in an all-cash deal. The purchase price represents a 35% premium on SolarWinds' 90-day average stock price at the time of the deal. There is broad excitement, Ramakrishna says. People within the company view this as a great validation for their work. The team has worked super hard to get to this point, but we also realize that our jobs are never done. We just have to keep earning the trust of our customers and our partners on a daily basis. It's business as usual for us. The deal still needs regulatory approval and is expected to close in the second quarter. Investment firms Thoma Bravo and Silver Lake, which hold 65% of the outstanding voting securities, approved the acquisition along with SolarWinds' board of directors. About the Author Shane Snider, Senior Writer, InformationWeek: Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
-
Possibilities with AI: Lessons From the Paris AI Summitwww.informationweek.comWorld leaders gathered for a global event that reflected the hunger for AI innovation and competition.Carrie Pallardy, Contributing ReporterFebruary 13, 20254 Min ReadArc de Triomphe, landmark in Paris, FranceImages-Europa via Alamy Stock PhotoThe AI Action Summit held in Paris on Feb. 10 and Feb. 11 focused more on the possibilities than the perils of AI. French President Emmanuel Macron kicked off the event with a series of deepfaked videos of himself, seemingly more amused than concerned. People -- government leaders, tech executives, academics, and researchers among them -- from more than 100 countries flocked to the event to talk about AI innovation, governance, public interest, trustworthiness, and its impact on the future of work. InformationWeek spoke to three experts who attended the event to get a sense of some of the major themes that emerged from the third global AI summit. Global Competition and Tension While the AI Action Summit brought together people from around the world, a sense of competition remained strong. Macron urged Europe to take a more innovative stance in hopes of being a player in the AI race being run by China and the US. US Vice President JD Vance took to the stage at the summit to declare that the US would be the dominant player in the AI space. Georges-Olivier Reymond, cofounder and CEO of quantum computing company Pasqal, tells InformationWeek that hardware was a key discussion point at the summit. The US, for example, placed restrictions on AI chip exports. Control the hardware, you have your sovereignty. And for me, that is one of the main takeaways of this event, Reymond tells InformationWeek. While Vance gave voice to the America First approach to AI, the US is still facing stiff competition. Earlier this year, DeepSeek burst onto the scene, seemingly giving China an edge in the global race for AI dominance. The company's founder, Liang Wenfeng, did not attend the summit, but other stakeholders from China did. Chinese Vice Premier Zhang Guoqing spoke about a willingness to work with other countries on AI, Reuters reports. Many countries in attendance, including France and China, signed an international agreement on inclusive and sustainable AI. But the US and UK are two notable holdouts, splintering hopes for a unified, global approach to AI. Innovation vs. Regulation In 2023, the first global AI meeting was held in the UK. The second was held in Seoul, South Korea, last year. This year marks a shift away from the emphasis these two events put on safety. Going into the AI Summit in Paris, France wanted to demonstrate the concrete benefits of AI, as opposed to solely its potential risks, Michael Bradshaw, global applications, data, and AI practice leader at Kyndryl, an IT infrastructure services company, tells InformationWeek via email. Vance was vocal about prioritizing innovation over safety.
The AI future is not going to be won by hand-wringing about safety, he said, the New York Times reports. And Macron called for Europe to move faster.While innovation may be in the front seat, regulation still has a role to play if AI is to be safe and secure and actually deliver on the value it promises.My takeaways center on the opportunities we have to ensure that AI is deployed to benefit society broadly, Matthew Victor, co-founderof the Massachusetts Platform for Legislative Engagement (MAPLE), a platform that facilitates legislative testimony, tells InformationWeek via email. While the development of social media created an array of significant harms, we have an opportunity to ensure that AI technologies are deployed to drive economic opportunity and growth, while also strengthening our civic capacities and the resilience of our democracy.More Change AheadGiven the speed with which AI is moving, policymakers are hard pressed to keep up.Yet, I believe global policymakers, especially through constructive industry engagement and events like the AI Action Summit that present an opportunity for dialogue, are advancing with the best intentions on behalf of their public and economic interests, says Bradshaw.Related:What the change ahead looks like could be hard to predict, but there are areas to watch. For example, Reymond was invited to the summit to speak about quantum computing and AI. It's a clear signal that now AI and quantum are linked, and people recognize that, he says.Reymond anticipates that quantum could take a great leap forward in the next few years. It could be a moment two to three years away, and it will have the same impact that ChatGPT [did], he says. And I think that the [governments] should be ready.When the next global AI summit arrives, to be hosted in India, world leaders and technology stakeholders will be facing the same big questions about AI leadership, its value, and its safety but just how much the technology has changed by then and how it will reshape the answers to those questions remains to be seen.About the AuthorCarrie PallardyContributing ReporterCarrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.See more from Carrie PallardyNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Comments ·0 Shares ·80 Views
-
How to Regulate AI Without Stifling Innovationwww.informationweek.comRegulation has quickly moved from a dry, backroom topic to front-page news, especially as technology continues to reshape our world at pace. With the UK's Technology Secretary Peter Kyle announcing plans to legislate against AI risks this year, and similar measures being proposed for the US and beyond, how do we safeguard against the dangers of AI while allowing for innovation? The debate over AI regulation is intensifying globally. The EU's ambitious AI Act, often criticized for being too restrictive, has faced backlash from startups claiming it impedes their ability to innovate. Meanwhile, the Australian government is pressing ahead with landmark social media regulation and beginning to develop AI guardrails similar to those of the EU. In contrast, the US is grappling with a patchwork approach, with some voices, like Donald Trump, promising to roll back regulations to unleash innovation. This global regulatory patchwork highlights the need for balance. Regulating AI too loosely risks consequences such as biased systems, unchecked misinformation, and even safety hazards. But over-regulation can also stifle creativity and discourage investment. Striking the Right Balance Navigating the complexities of AI regulation requires a collaborative effort between regulators and businesses. It's a bit like walking a tightrope: Lean too far one way, and you risk stifling innovation; lean too far the other, and you could compromise safety and trust. The key is finding a balance built on a few core principles. Risk-Based Regulation Not all AI is created equal, and neither is the risk it carries. A healthcare diagnostic tool or an autonomous vehicle clearly requires more robust oversight than, say, a recommendation engine for an online shop. The challenge is ensuring regulation matches the context and scale of potential harm. Stricter standards are essential for high-risk applications, but equally, we need to leave room for lower-risk innovations to thrive without unnecessary bureaucracy holding them back. We all agree that transparency is crucial to building trust and fairness in AI systems, but it shouldn't come at the cost of progress. AI development is hugely competitive, and these systems are often difficult to monitor, with most operating as a black box. This raises concerns for regulators, as being able to justify reasoning is at the core of establishing intent. As a result, in 2025 there will be an increased demand for explainable AI. As these systems are increasingly applied to fields like medicine or finance, there is a greater need for them to demonstrate their reasoning: explaining why a bot recommended a particular treatment plan or made a specific trade is a necessary regulatory requirement, while something that generates advertising copy likely does not require the same oversight. This will potentially create two lanes of regulation for AI depending on its risk profile. Clear delineation between use cases will support developers and improve confidence for investors and companies currently operating in a legal grey area. Detailed documentation and explainability are vital, but there's a fine line between helpful transparency and paralyzing red tape.
We need to make sure that businesses are clear on what they need to do to meet regulatory demands.Encouraging InnovationRegulation shouldnt be a barrier, especially for startups and small businesses.If compliance becomes too costly or complex, we risk leaving behind the very people driving the next wave of AI advancements. Public safety must be balanced, leaving room for experimentation or innovation.My advice? Dont be afraid to experiment. Try out AI in small, manageable ways to see how it fits into your organization. Start with a proof of concept to tackle a specific challenge -- this approach is a fantastic way to test the waters while keeping innovation both exciting and responsible.Related:AI doesnt care about borders, but regulation often does, and thats a problem. Divergent rules between countries create confusion for global businesses and leave loopholes for bad actors to exploit. To tackle this, international cooperation is vital, and we need a consistent global approach to prevent fragmentation and set clear standards everyone can follow.Embedding Ethics into AI DevelopmentEthics shouldnt be an afterthought. Instead of relying on audits after development, businesses should embed fairness, bias mitigation, and data ethics into the AI lifecycle right from the start. This proactive approach not only builds trust but also helps organizations self-regulate while meeting broader legal and ethical standards.Whats also clear is that the conversation must involve businesses, policymakers, technologists, and the public. Regulations must be co-designed with those at the forefront of AI innovation to ensure they are realistic, practical, and forward-looking.As the world grapples with this challenge, it's clear that regulation isnt a barrier to innovation -- its the foundation of trust. Without trust, the potential of AI risks being overshadowed by its dangers.0 Comments ·0 Shares ·79 Views
-
An AI Prompting Trick That Will Change Everything for Youwww.informationweek.comPam Baker, Contributing WriterFebruary 13, 20255 Min ReadZoonar GmbH via AlamyThis year comes packed with challenges from rising inflation and layoffs to fake job announcements, ungodly long job interviews, and hiring delays. One way to help you land a promotion, possibly avoid a layoff, or rise to the head of the line of job candidates is to improve your AI skills.To help you with that, here are several ways to use a phone picture as a prompt for AI models and apps like ChatGPT and Claude. Yes, phone pics can be used as prompts. Quick as a camera click, youre prompting like a pro!This information is drawn from my newest LinkedIn Learning online course Become a GenAI Power Prompter and Content Designer and my newest book Generative AI for Dummies, which was published last October.1. How to use a phone photo in an AI promptChatGPT and Claude will both allow you to attach files to your prompt. You do that by clicking on the paper clip icon beneath the prompt bar. Then select one or more files on your device that you want to attach to the prompt. In this case that will be the photo stored on your device that you want to include in your prompt.Most people think of attaching only text, CSV files, and spreadsheets to a prompt. Those can be very helpful too in getting great and highly targeted responses from AI. But few realize that these models can extract information from photos too.Related:Some of ChatGPT and Claudes competitors may be able to use photo data too, but for the purpose of illustrating this prompting tip, lets just stick to these two AI chatbots for now.2. What kind of phone pic makes a good prompt for AI?The short answer is that a photo of anything containing text about something you want to know more about or that contains information that you want the AI to build upon, is a good photo to use in a prompt.Choose a photo from your phones picture gallery and ask yourself what information does it contain that can be useful in a prompt for AI? Here are a few photo examples for you to consider what useful data they contain and what use that info may have for you. (Youll have to move on to other tips below for the answers. But do this exercise first).A phone pic of a slide that a keynote speaker is talking about in real timeA photo of a handwritten note you made on a napkin while chatting with other conference attendees about a business idea at the hotel bar one nightA photo of a page from a bookA photo of a broken machine part with information like model number, make, brand, etc.3. Pop-up info from a keynote speakers slide in real timeRelated:Speakers, good ones anyway, limit each of their slides to three or fewer bullet points. You might want to know more in order to follow the speakers presentation better. Take a quick phone pic of the slide on stage, attach it to the prompt bar in the ChatGPT mobile app and type your question or instruction in the prompt bar. An example is below. Voila! Instant popup information during the speakers speech!Example prompt: Extract the text from this pic and briefly explain the information in the second bullet point.4. From a handwritten note on a napkin to a bankable business planEvery seasoned pro knows they often get as much out of networking at a conference as they do from the presentations, speeches, and breakout sessions. 
Now you can get even more value from networking over lunch, at a mixer, or over drinks at the hotel bar. Suppose someone mentions an idea to you that you want to explore further, but you don't want to rudely pull out your phone to make yourself a note. Jot it down on a napkin, or whatever paper or material is handy. Yes, any handwritten note will do. Stick that note in your pocket. Later, perhaps back in your hotel room, use your phone to take a photo of your note. You can then attach it to a prompt for ChatGPT or Claude in a mobile or desktop app at your convenience. Here's an example prompt to write along with that photo attachment: Build a business plan from the information you extract from the attached photo. 5. Understand complex information by taking a picture of a page in a book You've heard quantum computing is a looming threat to cybersecurity and a serious boost to AI capabilities. You've also heard year after year that *this* is the year quantum computing gets real. But you want to know more than the marketing hype. You want to know what quantum computing actually is and how far it has actually progressed. Take a picture or a screenshot from a scientific paper or a book and use it as an attachment to a prompt to get ChatGPT or Claude to translate complex information into terms you can better understand. Now you know what you need to know. 6. Replace or fix a broken machine part at work using a phone pic and AI So here you are in a datacenter doing routine maintenance on hardware. You discover a loose or broken part on a cooling system or a server or something. Now you need to report it to whoever is in charge of ordering parts or vendor repair visits. But heck, you're not quite sure what to call that part or what info you need to request a replacement. Or maybe you are in your office and your desk chair sinks when you sit in it even after you raise it up again and again. Imagine that whatever machine, furniture, or tool you're working with or on poses a mystery to solve. Take a photo of it and prompt ChatGPT or Claude to identify it. This generally works best if the broken piece or the larger item has identifying text on it, such as a brand name or make, a year, and/or a serial or part number. Take a photo that includes that information. If the AI cannot identify the part from a photo without text, try prompting it in text only to identify all parts of whatever the larger item is. You may be able to identify the part from the descriptions the AI provides. About the Author Pam Baker, Contributing Writer: A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which are "Decision Intelligence for Dummies" and "ChatGPT For Dummies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
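The tips above all use the ChatGPT or Claude apps directly, but the same photo-as-prompt workflow can be scripted. Below is a minimal sketch using the OpenAI Python SDK with a vision-capable model; the model name, file path, and helper function are illustrative assumptions on my part, not details the article specifies.

```python
# Minimal sketch: send a phone photo plus a text instruction to a vision-capable model.
# Assumes the OpenAI Python SDK (pip install openai), an OPENAI_API_KEY environment
# variable, and a vision-capable model such as "gpt-4o". These are assumptions; the
# article itself describes the ChatGPT and Claude apps, not their APIs.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def prompt_with_photo(image_path: str, instruction: str) -> str:
    # Encode the photo as base64 so it can be embedded directly in the request.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": instruction},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Example from the article: extract text from a keynote slide photo.
    print(prompt_with_photo(
        "slide.jpg",  # hypothetical file name
        "Extract the text from this pic and briefly explain the second bullet point.",
    ))
```

Claude's messages API accepts images in a similar way (a base64 image content block alongside the text), so the slide, napkin-note, book-page, and broken-part prompts described above can be automated rather than typed into an app each time.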
-
Power Struggles in Data Center Alley: Balancing Growth, Sustainability, and Costswww.informationweek.comThe enormous growth of data centers and the vast amounts of energy they consume is a global concern. Yet no place is more emblematic of this challenge than Northern Virginia. Its Data Center Alley is Ground Zero for questions about how to power the data requirements of the 21st century.About 70% of the worlds internet traffic flows through data centers in the region. Facilities accommodate firms such as Amazon Web Services (AWS), Google, Meta, Microsoft and Verizon. The region, particularly Northern Virginia and Loudoun County, has become an epicenter for high-speed connectivity and data storage.AI and data are becoming a bigger part of everyones lives, observes Suhas Subramanyam, an attorney and former member of the Virginia Senate who is an incoming representative for the states 10th US Congressional district. Were going to need to build more data centers but there are growing concerns about how the process takes place.As power demands escalate and new and larger facilities come online, one of the biggest questions is where all the needed energy will come from -- and how much carbon will it contain? Yet there are also worries about rate increases for consumers, environmental impacts and how these facilities are changing the character of neighborhoods.Related:Surging DemandThe need to dial up energy to data centers is indisputable. According to a September 2024 report from McKinsey & Company, power requirements for these facilities will triple by the end of the current decade. Today, data centers draw somewhere between 3% to 4% of total power but the figure will hit 12% by 2030, the consultancy noted. Overall electrical demand could swell by 27% by 2050, according to online data service Statistica.Power-hungry GPUs are increasingly the culprit. These chips -- critical for training and inferencing artificial intelligence models, including generative AI -- pull about 10 times the energy of CPUs. They are driving the need for larger and more power-intensive data centers, observes Gillian Crossan, risk advisory principal and global technology leader at Deloitte. This has implications for both power and water.Data centers arent the only culprit, however. Demand for electricity has continued to rise as heating and cooling systems have become electrified, electric vehicles (EVs) have steered into the mainstream, and manufacturing firms have adopted robots and other advanced digital systems, says Jeffrey Shields, senior manager for external communications at PJM Interconnection, a regional transmission organization that coordinates the movement of wholesale electricity in the eastern US.Related:In Northern Virginia, which benefits from its proximity to the government and business infrastructure of the nations capital, the strain of keeping up with rapidly growing power demand is mounting. A modern data center typically consumes as much energy as 80,000 households annually and pulls power at a rate of 10 to 50 times greater than the equivalent floor space of a commercial office building, according to the US Department of Energy.Current EventsIt isnt as simple as adding capacity to the grid. The enormous spike in electricity demand collides with Virginias commitment to move to sustainable power. The Virginia Clean Economy Act (VCEA) mandates 100% carbon-free electricity across the state by 2050. Dominion Energy, Virginias largest utility, is scrambling to meet these requirements by 2045. 
However, its 2024 Integrated Resource Plan raises questions about how it will achieve these goals (Dominion did not respond to multiple requests for an interview).To keep up with peak load demand, Dominion estimates that it will need to double its grid capacity over the next 10 to 20 years. The utility has proposed a broad energy portfolio that adds both conventional and sustainable energy sources. The planned upgrades -- including a series of natural gas plants -- will require billions of dollars and could lead to rate increases that could hit 50% by 2039. At present, bills are increasing at about 2.5% annually.Related:State regulators already added a $15 per month fuel surcharge in September 2024, though current utility rates are set until the end of 2025. After that, the average power bill of $202 per month could spike.Consumer groups are taking notice. There is going to be a reckoning, states Julie Bolthouse, director of land use for Piedmont Environmental Council, a 501(c)(3) organization. Its questionable whether the current energy model can continue to function effectively.In fact, Bolthouse believes there are more questions than answers. Dominion has signed contracts to supply energy without clear proof it can acquire the energy or build out the infrastructure, she says. In addition, the data centers are pushing land prices up and changing the character of region. They are encroaching on neighborhoods, parks and other infrastructure, Bolthouse adds.Power PlaysThe challenge of balancing power demands with sustainability goals isnt going to disappear anytime soon. PJM Interconnection, which oversees power transmissions for over 65 million people across 13 states and Washington, D.C., has recommended slowing the retirement of gas and coal facilities until other sources of energy can completely fill the gap.Of course, any delay in transitioning to low-carbon or no-carbon electricity could undermine reduction targets for Virginia as well as the companies operating data centers. Many of these firms have made commitments that are visible in ESG reports and other documents. Worse, it increases long-term risks related to climate change. On the other hand, PJM warns that retiring generation facilities before viable replacements are in place could result in a supply crunch.Amid all the wrangling over energy supply, companies operating data centers must also become more efficient, Deloittes Crossan says. This includes focusing on design and performance gains possible through the expanded use of immersion cooling, battery storage, on-site renewables and emerging technology such as small modular nuclear reactors, which deliver zero-carbon energy. At the same time, a move to collaborative land-use planning can help align development with community needs, Bolthouse says.Government officials and others must also reassess current policies, including tax incentives and subsidies, Subramanyam argues. We need to better understand the impact data centers have on communities. Its unclear if we can keep up with the energy demand because, in some cases, the concentration of data centers in one area is too high and we may not be able to protect rate payers, he says. No one disputes the need for these facilities, but we have to meet commitments to clean energy and the public.0 Comments ·0 Shares ·113 Views
-
Top Cybersecurity Trends That Will Impact This Yearwww.informationweek.comFrom COVID-19 to war in Ukraine, and more, the past five years have brought cybersecurity to mainstream attention.The US Department of Defense recently hosted an international exchange on shaping cybersecurity workforce, following the publication of its 2023 strategy to align the department's efforts to identify, recruit, develop, and retain a data-literate and technology-adept cyber workforce. These actions, among similar developments globally, provide insights into some of the challenges that CISOs and cybersecurity teams will face in the coming years.In practice, 2025 is likely to see growing importance of and demand for CISOs. The growing threat ofglobal and regional political instability, paired with the increasing capabilities of violent extremist organizations and crime groups seeking to cause harm, means that access to data will become a key component of global power for both state and non-state actors -- all of which will require greater vigilance from cyber teams.Another trend driving cyber threats is the technological arms race. Driven by advances in quantum computing and artificial intelligence, the race between cyber exploiters and victims has further intensified. Cybersecurity and AI are now bipartisan national security issues and crucial components of Americas competitive advantage. Simultaneously, increasing tools and incentives for cybercriminals and advanced persistent threats (APTs) will continue to raise the stakes for private sector firms. The rise of zero-day attacks only further highlights the evolving tactics of cyber adversaries, and CISOs must remain vigilant to protect their organizations.Related:This is set against a shift in current political landscape in the US, with the incoming administration potentially marking a significant change in the cybersecurity demands on firms as they seek to reduce red tape.Heres a look at the top cybersecurity trends that will shape 2025 and beyond.1. Navigating SEC cybersecurity disclosure rulesIn 2024, new SEC cybersecurity disclosure rules led to a significant increase in the public reporting of incidents. However, the often-vague nature of these disclosures and their limited detail on impact left investors seeking greater clarity.While the incoming administration may consider rescinding these requirements to reduce regulatory burdens, it is more likely that the current status quo will persist through 2025. CISOs should take a proactive approach by analyzing disclosures made in 2024 to understand how they were received and pre-plan the level of disclosure their organization is prepared to make. This will help mitigate risks and ensure transparency while complying with existing requirements.Related:2. Understanding AIs complex roleArtificial intelligence will remain a focal point for cybersecurity teams in 2025. AIs adversarial uses, as highlighted by the FBI at RSA in 2024, include creating undetectable malware, automating reconnaissance, and executing deepfake scams. Simultaneously, organizations are pursuing the AI dream to unlock significant business benefits, often without fully considering security implications.To ensure safe usage of AI technology, CISOs must engage at the planning stages of adoption to ensure security is integrated rather than treated as an afterthought. 
Boards now expect clear strategies to address AI-related risks, including sophisticated phishing and social engineering attacks enabled by AI.CISOs must balance fostering innovation with maintaining robust security measures. They can do this by investing heavily in protecting their digital systems, physical assets and workforce from adversaries. By implementing software solutions capable of detecting cyber threats, restricting access to buildings, and safeguarding sensitive employee information -- CISOs can take the necessary steps to fortify their defenses.Related:3. Strengthening security culture to mitigate human errorDespite technological advancements, human actions -- whether through unintentional errors or deliberate breaches -- remain a primary cause of security incidents. In fact, up to 95% of successful security attacks result from human error.As technical solutions alone are insufficient to protect organizations, fostering a robust security culture becomes essential. Embedding security awareness and proactive behaviors into the organizational culture ensures that every employee understands their role in safeguarding sensitive information and digital assets. This human-centric approach provides a vital first line of defense, empowering individuals to act as security champions and take a proactive role in mitigating associated risks.4. Adapting to AI regulationsState-level AI regulations in the US will present significant challenges for CISOs in 2025. States such as Colorado, California, and Utah have already passed private-sector AI rules with varying effective dates, creating a complex compliance landscape. The absence of a pre-emptive federal approach means that organizations must navigate a patchwork of reporting, assessment, and governance requirements.Fortunately, frameworks like NISTs AI RMF and ISO 42001 offer a common foundation for compliance, enabling organizations to demonstrate their commitment to ethical and secure AI practices. Preparing for these requirements, along with global mandates such as the EU AI Act, will be a critical focus for cybersecurity teams in the coming year.5. Preparing for post-quantum cryptographyThe release of NISTs post-quantum encryption tools marks a pivotal moment for cybersecurity planning.The harvest now, decrypt later strategy employed by adversaries underscores the urgency of transitioning to post-quantum cryptography. Organizations must define multiyear strategies to implement these new standards to safeguard sensitive data against future quantum threats. Early adopters of post-quantum cryptography demonstrate not only technical readiness but also a commitment to customer data protection. CISOs who act decisively in 2025 will position their organizations as leaders in cybersecurity resilience.As we look ahead to 2025, the challenges facing CISOs, and cybersecurity teams are complex and multifaceted. From navigating SEC disclosure requirements and managing AI-related risks to strengthening security culture and preparing for post-quantum threats, proactive planning and strategic action are essential.By staying ahead of these trends, organizations can strengthen their defenses, protect critical assets, and maintain trust in an increasingly interconnected and digital era.0 Comments ·0 Shares ·96 Views
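The post-quantum guidance above starts with knowing where classical public-key cryptography is still in use. As one minimal, illustrative sketch of that inventory step -- an approach I am adding here, not something the article prescribes -- the following script assumes the Python cryptography package and a directory of PEM-encoded certificates, and flags the RSA and elliptic-curve keys that a "harvest now, decrypt later" adversary would target.

```python
# Minimal sketch: inventory certificate key algorithms as a first step toward a
# post-quantum migration plan. Assumes the "cryptography" package and a folder of
# PEM-encoded certificates; the path and report format are illustrative only.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa


def inventory_certs(cert_dir: str) -> None:
    for pem_path in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            algo = f"RSA-{key.key_size}"
        elif isinstance(key, ec.EllipticCurvePublicKey):
            algo = f"ECDSA ({key.curve.name})"
        else:
            algo = type(key).__name__
        # Classical RSA/ECC keys are the ones exposed to "harvest now, decrypt later";
        # listing them makes it easier to prioritize a multiyear transition plan.
        print(f"{pem_path.name}: {algo}, expires {cert.not_valid_after:%Y-%m-%d}")


if __name__ == "__main__":
    inventory_certs("./certs")  # hypothetical certificate directory
```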
-
Iowa Grapples with Data Centers and Demand for Waterwww.informationweek.comOver the past decade, corn and soybeans arent the only things sprouting from fertile Iowa soil. Data centers have been popping up with growing regularity. Tech giants like Apple, Google, Meta, and Microsoft have flocked to the Hawkeye State due to ample land, low energy costs, minimal earthquake risk, and generous tax incentives.But as climate change accelerates and water tables drop in some regions, a critical debate is surfacing. Groundwater in Iowa is not evenly distributed. Data centers in one place can be very different than in another, observes Keith Schilling, state geologist and director of the Iowa Geological Survey at the University of Iowa.Not surprisingly, the growing demand for water is trickling into policy, consumption patterns, environmental impacts and costs. Amid competing demands -- agriculture, business and residential use -- Iowa officials are taking notice. With increased demand and continued drought, aquifers arent being recharged as they were in the past, Schilling notes.Into the FlowAt present, 34 data centers exist across Iowa. Each day, these facilities consume somewhere between 300,000 gallons and 1.25 million gallons of water for cooling. At any given location, they typically account for about 2% to 8% of total water consumption -- though some facilities have agreements to use more water, if necessary.Related:Much of this water comes from underground aquifers. Under normal circumstances, theres sufficient rainfall to support agriculture, manufacturing and home use, but for more than a decade most of the state has endured a drought. This has translated into a need to pump additional water for crops, food processing, and ethanol production. According to Iowa Environmental Council, some wells are now operating at 20% of their original capacity.To be sure, the water situation varies considerably across the state -- and even within regions. For example, the state is bounded by the Missouri River and Mississippi River and other waterways run through Iowa. The eastern third of the state has abundant shallow carbonate aquifers that are recharged every year with precipitation, Schilling says. In central and western Iowa, the conditions are less favorable. The groundwater systems are shallower and more vulnerable.Twenty of the states data centers are in the Des Moines area, which is near the center of Iowa. So far, this part of the state has avoided problems associated with droughts and water shortages. But as newer facilities come online, water consumption increases and drought lingers, questions about water availability are growing -- particularly in more vulnerable areas in the western part of the state.Related:In many cases, companies looking to build data centers are using criteria other than water availability to select a site, Schilling says. They are often more interested in the surrounding infrastructure.Beneath the SurfaceBalancing water use across agriculture, manufacturing firms and tech companies -- while keeping rates down for residential users -- is a balancing act based in both economics and sustainability. When fixed costs can be spread amongst more users, everyone experiences lower rates -- including residential customers, says Roy Hesemann, utilities director for Cedar Rapids.Located toward the eastern portion of the state -- adjacent to the Cedar River -- the city has sufficient water and energy resources to support major data centers, Hesemann says. 
Cedar Rapids recently approved a $576 million Google data center that will use 200,000 to 1 million gallons of water daily and pull 25 megawatts of electricity. The project will generate $1 billion in local property taxes over 20 years (with about $529 million flowing back to Google).Related:The facility wont place additional pressure on supply or impact water standards. However, It will require us to accelerate timelines for planned expansion at the Northwest Water Treatment Plant to meet future capacity needs, Hesemann says. For now, the citys water rates rank in the middle for Iowa. Gaining additional efficiency is important. The city is exploring ways to reuse water, including discharge from data centers.Google is planning another $1 billion data center in Council Bluffs, which sits in the more arid western portion of the state. Once completed, the facility will add capacity to two other facilities built in 2012 and 2015. The expansion will result in a total of three datacenters that comprise nearly 3 million square feet of space split among 3 buildings. Google has invested over $5 billion in the region since 2007.Although these projects make economic sense, observers such as Schilling are taking notice and advocating for a better understanding of how data centers impact water consumption in Iowa. With ongoing drought conditions and growing demand for limited water resources, some aquifers are not being recharged adequately, he explains. Some areas may not be ideal locations for data centers.Data StreamsThe Iowa legislature is taking notice. In 2024, it designated $250,000 to map aquifers and study groundwater levels to gain a better understanding of how various user groups -- including data centers -- impact recharge rates and water levels. Governor Kim Reynolds described the funding as crucial for the development of models for budgeting this state's water resources."A growing dependence on data centers may be cause for concern, but digital technologies might also provide answers for Iowa. In July 2021, Iowa State University announced that it had landed $20 million grant from the US federal government to establish an AI Institute for Resilient Agriculture. It is developing digital twins, robotics, drones and connected field sensors that reduce water and chemical use while boosting crop yields.It makes sense for companies to establish data centers and other facilities in Iowa because of land availability, water resources, financial incentives and renewable energy, says Soumik Sarkar, a professor of mechanical engineering and computer science at Iowa State University. At the same time, we have witnessed a 20% to 30% increase in water demand for large AI companies and the state is coping with a drought. So, we must find ways to manage resources better.Another five-year, $7 million federal government funded project called COALESCE (COntext Aware LEarning for Sustainable CybEr-agricultural systems) is helping researchers at Iowa State University study ways to embed digital technologies, including AI, deeper into the food production system. AI and other tools can help us optimize processes, reduce pesticides and pollution and maximize our water systems, Sarkar states.Its important to understand how data centers and existing infrastructure impact water use -- and aquifers in Iowa, Schilling says. We dont want to reach a point where shortages occur, and people claim that the situation has taken them by surprise. We have the technology and tools to manage water resources effectively. 
We must use them to determine where to locate data centers and other facilities.0 Comments ·0 Shares ·111 Views
-
The Cost of AI: Navigating Demand vs Supply for AI Strategywww.informationweek.comIt is time to get real about scarcity of trained AI pros, priciness of technology and data, and the potential for plans to stall before grand AI strategies can launch.
-
Your Stack Is Limiting Your Teams Growth Potentialwww.informationweek.comIlya Khrustalev, Fractional CTO & Tech AdvisorFebruary 12, 20254 Min ReadAldar Darmaev via Alamy StockAn appropriately chosen technology stack enhances productivity, harmonizes work processes, and encourages employees to deliver high-quality results. Conversely, an incorrect choice can lead to a situation where the tools and technologies fail to meet the specific demands of the job or do not align with the skills and expertise of the team members.A misaligned data ecosystem leads to an extended learning time. At the beginning of a project, software specialists invest time in mastering new tools, which may slow down the process of development itself. Even after acquiring proficiency in the requisite technologies, engineers may struggle with their application leading to an increased number of bugs, higher maintenance costs and decreased efficiency of the entire development process.Using misaligned solutions negatively impacts team performance and innovation in the long term. When I joined a co-living marketplace, I observed that the team were using multiple frontend technologies, which led to a fractured codebase. Implementing standardization tools improved productivity and code quality, and sparked effective ideas like SEO improvements, server-side rendering, and A/B testing.Misaligned Tech Stacks Limit Professional DevelopmentRelated:A misaligned tech stack limits a team's ability to acquire new skills and grow professionally. Outdated technologies dampen engineers motivation and foster a sense of apathy. Developers dedicate more time to maintaining outdated systems and have fewer chances to explore innovative technologies.I saw such a situation at Mail.ru Games where engineers were tasked with maintaining outdated games. The ongoing process of bug fixing depleted their motivation and limited their capacity to implement creative solutions. Sunsetting these old projects allowed engineers to focus on new technologies and foster professional growth.Misalignment is especially harmful to remote workers. A study revealed that 60% of remote professionals feel their tech stacks are inadequate, which leads to wasted time in meetings and searching for information across disconnected applications. Enhancing the remote stack is more difficult but essential for productivity in hybrid work environments.Team leads frequently manage cross-functional teams, with distinct technologies. Consequently, it becomes crucial to harmonize these tools so that none of the components become a bottleneck. A well-structured stack supports collaboration, maximizing productivity and achieving business objectives. Research indicates that organizations with strong cross-functional collaboration see nearly double the revenue growth.Related:A streamlined technology stack significantly impacts scalability, especially for startups that need quick iteration cycles. Well-documented, widely used technologies allow teams to grow quickly and ensure flexibility in response to rapidly changing requirements.Signs Your Tech Stack Is Stifling Team GrowthAnother sign that the technology is hindering the team's ability to do its job well is when developers often need assistance from a team lead or senior engineers. This happens when a company does not have a centralized knowledge base where all information is organized according to processes, such as in Confluence.Less obvious signs include difficulty in onboarding new team members. 
At the co-living marketplace where I worked, the rapid adoption of multiple frontend technologies led to complexity, and the team was unable to innovate or launch new features. In contrast, at a dating app Badoo, we had a well-structured tech stack that enabled onboarding within a day.Increasing technical debt and low bus factor -- where only a few engineers are experts in certain technologies -- also stifle growth.Related:In general, the CTO can take practical steps to determine if the technology stack is limiting the potential of their team and realign the stack for growth:Conduct a thorough inventory of company technologies. This software asset management is especially important for large organizations where the use of stacks may lack standardization.Identify outdated frameworks and libraries that are no longer under long-term support. These can reduce productivity and pose security risks.Gather feedback from teams to identify concerns about the current stack, including technical limits and engineers' feelings about using it. Employees can provide insight into tools that boost or hinder productivity.Develop a prioritized plan to upgrade or phase out outdated technologies based on measurable benefits such as developer productivity and software costs, as well as subjective factors such as team satisfaction.Discuss the proposed changes with the team leads and key stakeholders. Start with smaller groups to get their feedback and then expand the conversation to the entire development team to gain approval.An appropriately chosen tech stack is a strategic asset that can greatly enhance team productivity and employee morale. By optimizing workflows and meeting the needs of employees, organizations can create a more efficient and rewarding work environment, ultimately leading to greater success of a product in a competitive landscape.About the AuthorIlya KhrustalevFractional CTO & Tech AdvisorIlya Khrustalev is a tech advisor and fractional CTO with over 20 years of experience, specializing in helping startups and tech companies achieve ambitious goals through strategic technology transformations. Advising different startups, he helps to develop innovative technological solutions for the companies products. Ilya focuses on scaling operations while fostering meaningful social impact. His expertise, honed at corporations like Badoo, Yandex, and Lamoda, along with his entrepreneurial background, enables him to seamlessly bridge the gap between technology and business. He delivers tailored solutions that streamline processes, reduce costs, and align teams, empowering organizations to scale efficiently and navigate change with confidence.See more from Ilya KhrustalevNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports0 Comments ·0 Shares ·76 Views
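The audit steps above begin with taking inventory of the technologies a team actually runs. As one small, illustrative slice of that inventory -- assuming pip and npm are installed and the script is run from a project root, details the article does not specify -- the following sketch lists dependencies with newer releases available, a quick signal of aging pieces of the stack.

```python
# Minimal sketch: list outdated dependencies as one input to a tech-stack audit.
# Assumes pip and npm are installed and on PATH, and that the script is run from a
# project that uses both; everything here is illustrative, not from the article.
import json
import subprocess


def outdated_python_packages() -> list[str]:
    # "pip list --outdated --format=json" reports installed packages with newer releases.
    raw = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f"{p['name']} {p['version']} -> {p['latest_version']}" for p in json.loads(raw)]


def outdated_npm_packages() -> list[str]:
    # "npm outdated --json" exits non-zero when anything is outdated, so no check=True here.
    raw = subprocess.run(
        ["npm", "outdated", "--json"], capture_output=True, text=True,
    ).stdout or "{}"
    data = json.loads(raw)
    return [f"{name} {info.get('current')} -> {info.get('latest')}" for name, info in data.items()]


if __name__ == "__main__":
    for line in outdated_python_packages() + outdated_npm_packages():
        print(line)
```

A report like this is only one signal; it still needs the qualitative feedback from team leads described above before deciding what to upgrade or phase out.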
-
Will AI Chip Supply Dry Up and Turn Your Project Into a Costly Monster?www.informationweek.comShane Snider, Senior Writer, InformationWeekFebruary 12, 20256 Min ReadTithi Luadthong via Alamy StockWith companies racing to add generative AI (GenAI) capabilities to their business arsenal, ammo may be running low as AI chip demand is on pace to outpace supply -- a concern that may have IT leaders looking for creative solutions in the coming years.Nvidia has become the darling of AI, with its powerful graphics processing units (GPUs) multiplying in data centers around the world. The companys popular wares dont come cheap. Its latest Blackwell GPU fetches more than $40,000 per unit, while its last generation Hopper H100s commands a respectable $30,000. AMD, which produces the cheaper MI300X GPU between $10,000 and $15,000, expects data center demand for its AI chips segment to drive big revenue growth.A report from Bain & Company said businesses should expect a likely AI chip shortage, with AI driving demand for components by 30% or more by 2026. A demand increase of 20% or more would be enough to put a damper on the AI chip supply chain. And its not just data center GPUs facing a crunch -- demand for AI-enabled mobile devices and PCs will spur upgrades in the coming months and years.As the technology sector learned in the days of the pandemic, the chip supply chain is fragile. COVID-19 drove a huge increase in sales of PCs to handle remote work needs, but supply chain shortages created a bottleneck.Related:The semiconductor supply chain is incredibly complex, and a demand increase of about 20% or more has a high likelihood of upsetting the equilibrium and causing a chip shortage, the report stated. The AI explosion across the confluence of the large end markets could easily surpass that threshold, creating vulnerable chokepoints throughout the supply chain.If demand projections hold at the current trajectory, key components for semiconductors would need to almost triple production capacity by 2026, the report says.In a live interview with InformationWeek, Anne Hoecker, global head technology, media and telecom with Bain & Company and one of the reports authors, says while demand is currently high, supply is plentiful -- for now. As we look at it there are probably a few things that could drive a chip shortage, she says, noting rapid developments in the AI space over the last couple of years.What would really drive capacity tightness is if theres the killer AI app that everyone is still kind of waiting for -- something that really makes AI PCs take off, or an upgrade on your phone cycle. That would drive a lot of demand for a lot of different types of semiconductors across different notes that could really drive tightness across the board on supply chains, she says.Related:Joseph Hudicka, a supply chain expert and adjunct professor at Rider University, says such a chip shortage would drive AI project costs higher. Basic laws of economics reflect that high demand and low supply increases prices and continues to increase those prices until a market reaches the other side of the bullwhip when supply again outpaces demand, he tells InformationWeek in an email interview. This phenomenon naturally presses project costs to exceed budgets.Supply Chain ConstraintsAmerican companies like AMD and Nvidia count on Taiwan Semiconductor Manufacturing Company (TSMC) to manufacture their chips. So far, the company has helped the US maintain a competitive lead in AI chip production over China. 
Efforts to diversify the semiconductor supply chain are underway, with TSMC planning to ramp up production at a new production facility in Arizona. Intel and Samsung are also ramping up their manufacturing efforts in the US -- with the help of the $52.7 billion CHIPS Act.But those projects are still years from coming online.TSMC has been building in Arizona for several years now, and even when they are able to bring up their capacity, its still going to be a small portion of what they produce in Taiwan. This is something thats going to take decades, Bain & Companys Hoecker says.Related:And while we havent reached the point of a shortage, TSMC is racing to keep up with spiking demand."I tried to reach the supply and demand balance, but I cannot today, TSMC CEO C.C. Wei said in an earnings call in July. "The demand is so high, I had to work very hard to meet my customer's demand. We continue to increase."Wei said AI chip inventories "continue to be very tight all the way through probably 2025 and hopefully can be eased in 2026.""We're working very hard, as I said, wherever we can, whenever we can, he said. All my customers ... are looking for leading-edge as a capacity for the next few years, and we are working with them.And TSMC is particularly vulnerable because of the countrys contentious relationship with China, which continually threatens military intervention. A military conflict could have a disastrous impact on the semiconductor supply chain. If something big were to happen there, the impact would be massive, Hoecker says. Thats why a lot of companies are pushing TSMC to diversify.Hudicka agrees. Expect to see ever-increasing investments in chip production in other markets like IndiaWhat IT Leaders Can DoCIOs and other IT leaders face tremendous pressure to quickly develop GenAI strategies in the face of a potential supply shortage. With the cost of individual units, spending can easily reach into the multi-million-dollar range.But it wouldnt be the first time companies have dealt with semiconductor shortages. During the COVID-19 pandemic, a spike in PC demand for remote work met with global shipping disruptions to create a chip drought that impacted everything from refrigerators to automobiles and PCs.One thing we learned was the importance of supply chain resiliency, not being overly dependent on any one supplier and understanding what your alternatives are, Hoecker says. When we work with clients to make sure they have a more resilient supply chain, we consider a few things One is making sure they rethink how much inventory do they want to keep for their most critical components so they can survive any potential shocks.She adds, Another is geographic resiliency, or understanding where your components come from and do you feel like youre overly exposed to any one supplier or any one geography.Nvidias GPUs, she notes, are harder to find alternatives for -- but other chips do have alternatives. There are other places where you can dual-source or find more resiliency in your marketplace.Lastly, Hoecker says leaders must focus on forecasting. Forecasting is very critical for companies, but also quite a challenge in some of these markets that can be a bit cyclical. But the more you can forecast your own demand and work closely with your suppliers to make sure they understand what your forecast is, the more likely that the whole supply chain will have enough capacity.Real-time communication with suppliers is also a key, says Hudicka. 
IT leaders should digitize communication signals of supply and demand with a mix of suppliers. This delivers a first-mover advantage as global macro and micro signals of supply and demand rise in intensity -- from ebbs and flows to tsunami-sized shifts.
About the Author: Shane Snider, Senior Writer, InformationWeek. Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal and the Raleigh News and Observer, and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
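To make the supply squeeze described above concrete, here is a rough back-of-the-envelope sketch of how quickly AI-driven demand can outrun fabrication capacity. The 20% "shortage threshold" and the roughly 30% demand growth come from the Bain & Company figures cited in this article; the baseline year and the assumed capacity growth rate are illustrative assumptions, not reported numbers.

```python
# Back-of-the-envelope sketch of the supply/demand squeeze described above.
# DEMAND_GROWTH and SHORTAGE_THRESHOLD reflect the Bain figures cited in the
# article; CAPACITY_GROWTH and the base year are hypothetical.

BASE_YEAR = 2023
DEMAND_GROWTH = 0.30       # annual AI-driven demand growth (Bain upper figure)
CAPACITY_GROWTH = 0.15     # assumed annual capacity expansion (illustrative)
SHORTAGE_THRESHOLD = 0.20  # Bain: ~20% excess demand can upset the supply chain

demand = capacity = 1.0    # index both to 1.0 in the base year
for year in range(BASE_YEAR + 1, 2027):
    demand *= 1 + DEMAND_GROWTH
    capacity *= 1 + CAPACITY_GROWTH
    gap = demand / capacity - 1
    status = "likely shortage" if gap > SHORTAGE_THRESHOLD else "tight but workable"
    print(f"{year}: demand index {demand:.2f}, capacity index {capacity:.2f}, "
          f"gap {gap:+.0%} -> {status}")
```

Even with a generous capacity assumption, a sustained 30% demand curve crosses the 20% threshold within a couple of years, which is the dynamic Hoecker and Hudicka describe.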
-
Is AI Driving Demand for Rare Earth Elements and Other Materials?www.informationweek.com
Artificial intelligence is changing the world in innumerable ways. But it's not all chatbots and eerily realistic images. This technology, for all its surreal qualities, has a basis in the material world. The materials that power its capabilities range across the periodic table -- from easily accessible elements such as silicon and phosphorus to rare earth elements (REEs), derived from complex purification processes.
Rare earth elements are a series of 15 elements, atomic numbers 57 to 71 on the periodic table, known as the lanthanide series, along with two other elements (atomic numbers 21 and 39) with similar properties. They are divided into light and heavy categories. Heavy rare earth elements, which have higher atomic numbers, are less common. The light rare earths are lanthanum, cerium, praseodymium, neodymium, europium, promethium, samarium, and gadolinium. The heavy rare earths are yttrium, terbium, dysprosium, holmium, erbium, thulium, ytterbium, and lutetium. Scandium falls outside the two categories.
These metals are not actually rare -- they just exist in low concentrations and are difficult to extract. They are crucial components of the semiconductors that provide the computing power that drives AI. They possess uniquely powerful magnetic qualities and are excellent at conducting electricity and resisting heat. These qualities make them excellent for graphics processing units (GPUs), application-specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs). REEs are also crucial to the sustainable energy production that supposedly offsets AI's drain on the power grid -- notably wind turbines. The market for these metals is expected to reach $10 billion in the next two years.
If recent headlines are to be believed, some of these materials are becoming increasingly scarce due to supply chain issues. China has throttled the export of REEs and other critical materials. It produces some 70% of global supply and processes around 90% of REEs. Whether that is a genuine concern is debated. It has certainly resulted in trade tensions between China and the West. But other countries, including the United States, are attempting to ramp up production, and prospects in the deep sea may offer additional sources. InformationWeek investigates, with insights from David Hammond, principal mineral economist at chemical manufacturer Hammond International Group, and Ramon Barua, CEO of rare earths supplier Aclara Resources.
Which Elements Are Required to Power AI?
Semiconductors comprise some 300 materials -- with REEs and other critical minerals among them. Among the most crucial components are cerium, europium, gadolinium, lanthanum, neodymium, praseodymium, scandium, terbium, and yttrium, as well as the critical minerals gallium and germanium. Some REEs are used in the manufacturing process and others are integrated into the chips themselves -- used to dope other materials to alter their conductive properties. The performance of gallium nitride and indium phosphide is enhanced by doping with europium and yttrium, for example. And layers of oxides formed from gadolinium, lanthanum, and lutetium have improved logic and memory performance. The proportions of the materials used in semiconductors are largely trade secrets -- and thus the demand for specific REEs and other critical minerals for semiconductors is difficult to determine.
But they are likely not the major driver of extraction of these elements.The usage of rare earths in semiconductors is really a minor aspect of all rare earth demand, Hammond claims. I don't believe it will ever be a major demand driver for rare earths. Less than 10%, probably 5%.Dysprosium, neodymium, praseodymium, and terbium are essential components of the magnets used in wind turbines -- which comprise a portion of the sustainable energy used to supposedly offset AI energy drain. Hammond thinks that demand for these REEs, also used in generators and solar panels, will be the major driver for extraction and consumption of REEs. Whether that demand will compete with demand from the semiconductor industry remains unknown.Related:The need for these other applications is probably going to create that marginal supply that is going to be used by semiconductors, Barua predicts.Additional elements, such as gallium, germanium and compounds such as high-purity aluminum (HPA) are also essential. Common elements including silicon and copper play key roles as well. Demand for copper is expected to grow significantly -- by up to a million metric tons in the next five years.Many of these elements, though crucial, are only required in small quantities. Last year, the US required 19 metric tons of gallium, Hammond says. That's basically 19 pickup trucks of gallium. The panic was so vastly exaggerated to be almost in the realm of stupidity.How Available Are These Elements?China has a monopoly on REEs, both in terms of extraction and processing. It produced more than 240,000 metric tons in 2023. But REEs are also found elsewhere -- the US, Australia, India, Myanmar, Russia, and Vietnam. They are relatively common and usually found together, in varying levels of abundance.China only holds around 40% of the world's reserves of these minerals. China was not always the primary producer -- prior to the 1980s, the US was dominant. But Chinas more lax environmental regulations proved advantageous and by the late 1990s had the upper hand in terms of availability and processing technology.While China currently has a stranglehold on supply and processing, other countries are investigating how to leverage their own reserves of REEs. The US and Australia still manage to extract substantial amounts of these minerals. The processing technology required to turn these elements into usable materials is perhaps the most pressing issue -- countries that extract REEs usually send them to China for refinement.The big issue for rare earths isnt so much finding them. Its processing them, Hammond observes. It requires a challenging chemical process to extract the individual components.David Hammond, Hammond International GroupThe companies producing rare earths are pretty sticky about talking about it -- for competitive reasons. But also, nobody really knows what the demand is going to be. Nobody really knows what the supply is going to be, he adds.China also has significant supplies of other minerals critical to semiconductor manufacturing. It has a near monopoly on gallium and produces close to 70% of the worlds supply of germanium. It also has significant supplies of fluorine, which is essential in chip manufacturing, but other countries including Mexico also have reserves. 
Copper has also proven to be a major element in improving the speeds of semiconductors -- and while China does have copper resources and significant refining capabilities, countries such as Chile and Peru do as well and will likely offer sufficient supply to the Western world.How is International Trade Affecting Their Availability?Chinas near monopoly on rare earths and other critical materials has the Western world scrambling for other sources. In 1987, Chinese leader Deng Xiaoping said, The Middle East has oil, China has rare earths.China has leveraged these resources strategically in the past several decades, limiting global exports in 2009, reducing exports to Japan in 2010 following a conflict over a disputed territory and further throttling global supply in 2011. The country reversed course in 2015 following a 2014 World Trade Organization (WTO) decision that found its restrictions violated WTO agreements. The decision ultimately did little to quell the escalating chip wars between China and the West.In July 2023, China placed export limits on gallium and germanium, two critical mineralsthatare essential to the machines used to create semiconductor chips. The US instead sourced these metals from Japan and Belgium. In November 2023, China instituted stringent new reporting requirements for a variety of critical minerals, including rare earths and the following month banned export of technology involved in rare earth refinement. They were further tightened in October 2024. And in December 2024, it banned exports of antimony, gallium, germanium and several other elements to the US.The US has parried these restrictions with its own policies limiting exports of semiconductors and the technology used to manufacture them, notably in 2018, 2022 and 2024, leading China to ramp up its efforts to develop its own techniques and equipment. So, too, the US and its partners are attempting to accelerate their own efforts to mine and refine rare earths and other critical minerals.Still, Hammond cautions that hyperbolic media coverage may be overstating the issue. While China and the West are in competition, it comes down to business strategy, he thinks.What Are the Alternatives?Even if the reduced supply of rare earths and other minerals from China is ultimately a minor issue for the semiconductor industry, it clearly behooves the West to seek other sources of these materials -- and to figure out how to extract them with minimal environmental impact.This was underscored by a 2020 executive order urging greater domestic production in the US and in allied nations. The CHIPS (Creating Helpful Incentives to Produce Semiconductors) and Science Act passed in 2022 aims to facilitate greater domestic production through grants to support research on the subject.The Mountain Pass mine in California, reopened in 2017 following years of closures and other incidents, provides some 15% of the global supply of rare earths. It is the only active mine in the US, though multiple other prospects have been identified in locations such as Texas and Wyoming. It is difficult to tell which are viable, Hammond says. Some may not ultimately be productive.Though significant efforts have been put into extracting REEs from waste products, Hammond thinks they are probably futile. We spend all this money and we're not even the first step towards a commercial process, he says. 
I don't think we ever will be, because its just technically too hard.Drone surveys initiated by the Defense Advanced Research Projects Agency (DARPA) are aimed at identifying new sources using spectroscopic analysis. Even in the event of viable discoveries, refinement technology lags Chinas -- most rare earth elements extracted outside China are still refined there. Still, the West and its allies do hold substantial reserves of other critical minerals, which will likely provide additional leverage.Some of the shortfall is made up by a mine and concentration facility in Australia and another separation facility in Malaysia operated by the Lynas Corporation, which is also building a refinement facility in Texas. Barua explains that his company has discovered ionic clays in Chile and Brazil. They plan on extracting them using a contained process that does not have the severe environmental impacts that have plagued REE processing in China.Ramon Barua, Aclara ResourcesA Belgian rare earth refinery in France set to open in 2025 hopes to source some 30% of its materials from recycled electronics.Barua, however, is skeptical that recycling of rare earth magnets will offer significant supply. It is probably going to be a miniscule market, he says. Theres no way that we can depend on that to feed whats coming.Chinas low prices are a major hurdle for mining operations in other countries, he adds. The only reference that we have is the Chinese price. That price being low then prevents operations from being financially feasible or profitable. Its a challenge for rare earth projects to develop in the Western world.Initiatives to mine critical minerals, including rare earths, from deep sea deposits are also underway. Polymetallic nodules in some deep-sea abysses may eventually offer significant quantities of cobalt, copper, manganese, nickel, and other elements. Projects assessing the viability of extracting them have been initiated, but have been held up by regulatory issues, largely due to potential environmental impacts.Companies are also devising technologies that do not rely on rare earths at all, which may take some of the pressure off on the demand side -- in some cases using AI to do so. In the meantime, semiconductor manufacturers will have to make do with an uneven and unpredictable REE market.0 Comments ·0 Shares ·121 Views
-
Cooling AI: Keeping Temps Downwww.informationweek.comJohn Edwards, Technology Journalist & AuthorFebruary 11, 20256 Min ReadTithi Luadthong via Alamy StockData centers are one of the most energy-intensive building structures, consuming 10- to 50-times more energy per square foot than a typical commercial office building and accounting for approximately 2% of the nation's total electricity consumption, says Todd Grabowski, president of global data center solutions at Johnson Controls, an HVAC and facilities management firm, citing US Department of Energy statistics.In an email interview, Grabowski notes that a rapid shift to AI workloads is driving data center energy demand to record high levels, with AI tasks now consuming up to 10-times more power than conventional IT operations. High-performance computing racks will require 100 to 120 kilowatts (kW) per rack in the near future, he predicts.Data centers specifically designed to handle AI workloads generally rely on servers using a graphics processor unit (GPU), a device initially designed for digital image processing and to accelerate computer graphics. A major drawback of these systems is that they generate a high thermal design power (TDP), meaning they produce a large amount of heat per processor, per server, and per rack.AIs Thermal ImpactWhen running AI processes, GPUs can consume over a kilowatt of power, much higher than classical CPUs, which typically require a maximum of approximately 400 watts, says Nenad Miljkovic, a professor in the mechanical science and engineering department at the University of Illinois Urbana-Champaign. "Pure air cooling will not work for the majority of AI servers, so liquid cooling is required," he states in an online interview. "Liquid is better than air, since it has better properties, including higher thermal conductivity and heat capacity." Drawbacks, however, include higher cost, reduced reliability, and greater implementation complexity.Related:GPU-based servers are designed and used for high-performance computing, which can process substantial amounts of data quickly, Grabowski says. He observes that AI clusters operate most efficiently when latency is reduced by utilizing high-bandwidth fiber optic connections, strategically placed servers, and an optimized network topology that minimizes data travel distance. Grabowski predicts that most future data centers will feature dense racks generating a large amount of heat and packed into multi-story facilities.The real issue facing data center operators isn't cooling, but energy management, states David Ibarra, international regional leader with datacenter builder DPR Construction. "The industry has substantial operational experience in effectively cooling and managing cooling systems for large-scale data centers," he explains in an online interview. "The primary challenge facing AI datacenter operators is the increased power densities of GPU rack clusters within the server racks." Ibarra notes that cooling loads diversification requires managing not only new GPU racks, but also CPU-based racks, storage, and network racks. "Therefore, engineering and planning must consider the varying characteristics of cooling loads for each type of rack."Related:Seeking SustainabilityAs demand increases, a growing number of data center operators are transitioning from traditional air-cooling to a hybrid cooling system combining both liquid and air-cooling technologies. 
"This change is driven by the increasing demand for large AI GPU racks, which require liquid cooling to efficiently remove heat from their high-core-count processors," Ibarra says.To advance sustainability, Miljkovic suggests locating data centers close to renewable energy sources. "For example, near a nuclear power plant, where power is abundant, and security is good."Solar and wind power are often touted as solutions by green advocates yet aren't generally considered practical given the fact that new data centers can easily consume over 500 megawatts of power and frequently exceed a gigawatt or more. A more practical approach is using data center-generated heat, Miljkovic says. "All of the heat generated from the data center can be re-used for district heating if coolant temperatures are allowed to be higher, which they can [accomplish] with liquid cooling."Related:Additional AlternativesA growing number of AI data centers are being designed to mimic power plants. Some are actually being built on decommissioned power plant sites, using rivers, lakes, and reservoirs for cooling, says Jim Weinheimer, vice president of data center operations at cloud services provider Rackspace. "These [facilities] must be carefully designed and operated, but they have huge cooling capacity without consuming water," he observes via email.Local climate can also play an important role in data center cooling. Cold weather locations are increasingly favored for new data center builds. Lower ambient temperatures reduce the amount of cooling needed and, therefore, the need for water or other coolant required by the AI data center, says Agostinho Villela, Scala Data Centers' chief innovation and technology officer,in an online interview. Alternatively, closed loop systems can be used to conserve water, since they reduce the need to draw on external water sources. Data center heat recovery systems can also reduce the aggregate need for power by providing facility heat in the winter.AI-driven cooling optimization technology is also beginning to play a crucial role in sustainable data center operations. By deploying machine learning algorithms to monitor and manage cooling systems, data centers can dynamically adjust airflow, liquid flow, and compressor activity based on real-time thermal data. "This adaptive approach not only prevents energy wastage but also extends the lifespan of hardware by maintaining consistent and efficient cooling conditions," Villela says. "Such systems can even predict potential equipment overheating, enabling preemptive measures that reduce downtime and additional energy expenditures."Looking ForwardLimitations in chip size and density will eventually force data center operators to explore new designs and materials, including facilities that may completely change the way data centers operate, Weinheimer predicts. "It will be a combination of factors and new technologies that allow us to make the next leap in computing power, and the industry is very motivated to make it a reality --thats what makes it so exciting to be part of this industry."Considering the number of cooling methods being tested and evaluated, the only thing that seems certain is continued uncertainty. "Its a bit like the Wild West," Miljkovic observes. "Lots of uncertainty, but also lots of opportunity to innovate."Read more about:Cost of AIAbout the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. 
His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
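Miljkovic's earlier point about coolant properties can be made concrete with a quick heat-balance estimate. The sketch below assumes a 100 kW rack (the density Grabowski cites), an allowed 10-degree coolant temperature rise, and textbook values for air and water; it is an illustration of the physics, not a design calculation.

```python
# Rough heat-balance sketch: coolant flow needed to remove one rack's heat.
# Rack power is the figure cited in the article; delta-T is an assumption.

RACK_HEAT_W = 100_000      # ~100 kW per high-density AI rack
DELTA_T = 10.0             # allowed coolant temperature rise across the rack, K (assumed)

CP_AIR = 1_005.0           # specific heat of air, J/(kg*K)
CP_WATER = 4_186.0         # specific heat of water, J/(kg*K)
AIR_DENSITY = 1.2          # kg/m^3 at room conditions
WATER_DENSITY = 1_000.0    # kg/m^3

def mass_flow(heat_w: float, cp: float, delta_t: float) -> float:
    """Coolant mass flow (kg/s) needed to carry away heat_w at a given temperature rise."""
    return heat_w / (cp * delta_t)

air_kg_s = mass_flow(RACK_HEAT_W, CP_AIR, DELTA_T)
water_kg_s = mass_flow(RACK_HEAT_W, CP_WATER, DELTA_T)

print(f"Air:   {air_kg_s:.1f} kg/s  (~{air_kg_s / AIR_DENSITY:.1f} m^3/s of airflow per rack)")
print(f"Water: {water_kg_s:.1f} kg/s (~{water_kg_s / WATER_DENSITY * 1000:.1f} L/s of water per rack)")
```

At these densities, moving the heat with air alone would take roughly eight cubic meters of airflow per second per rack, while a couple of liters of water per second does the same job -- which is why operators are shifting to liquid and hybrid cooling.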
-
How Enterprise Leaders Can Shape AIs Future in 2025 and Beyondwww.informationweek.comOnce confined to narrow applications, artificial intelligence is now mainstream. Its driving innovations that are reshaping industries, transforming workflows, and challenging long-standing norms.In 2024, generative AI tools became regular fixtures in workplaces, doubling their adoption rates compared to the previous year, according to McKinsey.This surge in adoption highlights AIs transformative potential. At the same time, it underscores the urgency for businesses to fully grasp the opportunities and significant responsibilities that accompany this shift.AIs applications are astonishingly broad, from personalized healthcare diagnostics and real-time financial forecasting to bolstering cybersecurity defenses and driving workforce automation. These advancements promise substantial efficiency gains and insight, yet they also come with profound risks. For enterprise IT managers, who often spearhead these initiatives, the stakes have never been more significant or more complex.The years ahead likely will be defined by how adeptly businesses can navigate this duality. The immense promise of transformative AI innovation is counterbalanced by the equally critical need to mitigate risks through robust data validation, human-in-the-loop systems, and proactive ethical safeguards. As we head into 2025, these three themes will drive the future of AI.Related:Human-Machine Interaction Will GrowThe promise of AI lies not in replacing human oversight but in enhancing it. The increased adoption of AI means it increasingly will integrate into workflows where human judgment remains essential, particularly in high-stakes sectors such as healthcare and finance.In healthcare, AI is revolutionizing diagnostics and treatment planning. Systems can process vast amounts of medical data, highlighting potential issues and providing insights that save lives. Yet, the final decision often rests with clinicians, whose expertise is essential to interpreting and acting on AI-generated recommendations. This collaborative approach safeguards against over-reliance on technology and ensures ethical considerations remain central.Similarly, in financial services, AI aids in risk assessment and fraud detection. While these tools offer unparalleled efficiency, they require human oversight to account for nuances and contextual factors that algorithms may miss. This balance between automation and human input is critical to building trust and achieving sustainable outcomes.Deploying AI responsibly requires enterprise IT managers to prioritize systems that maintain this collaborative framework. Setting the stage for responsible use requires implementing mechanisms for continuous oversight, designing workflows that incorporate checks and balances, and ensuring transparency in how AI tools arrive at their outputs.Related:AI Accuracy Is Even More ImportantAccurate AI systems are critical in fields where errors can have far-reaching consequences. For example, a health misdiagnosis resulting from faulty AI predictions could endanger patients. In finance, an erroneous risk assessment could cost organizations millions. One key challenge is ensuring that the data feeding these systems is reliable and relevant. AI models, no matter how advanced, are only as good as the data they are trained on. Inaccurate or biased data can lead to flawed predictions, misaligned recommendations and even ethical lapses. 
For instance, financial models trained on outdated or incomplete datasets may expose organizations to unforeseen risks, while medical AI could misinterpret diagnostic data. But capitalizing on what AI has to offer requires more than just accurate, clean data. The selection of the right model for a given task plays a crucial role in maintaining accuracy. Over-reliance on generic or poorly matched models can undermine trust and effectiveness. Enterprises should tailor AI tools to specific datasets and applications, integrating domain-specific expertise to ensure optimal performance. Enterprise IT managers must adopt proactive measures like rigorous data validation protocols, routinely auditing AI systems for biases, and incorporating human review as a safeguard against errors. With these best practices, organizations can elevate the accuracy and reliability of their AI deployments, paving the way for more informed and ethical decision-making.
Regulatory Focus Will Be Narrow
As AI continues to evolve, its growing influence has prompted an urgent need for thoughtful regulation and governance. With the incoming administration prioritizing a smaller government footprint, regulatory frameworks will likely focus only on high-stakes applications where AI poses significant risks to safety, privacy and economic stability, such as autonomous vehicles or financial fraud detection. Regulatory attention could intensify in sectors like healthcare and finance as governments and industries strive to mitigate potential harm. Failures in these areas could endanger lives and livelihoods and erode trust in the technology itself. Cybersecurity is another area where governance will take center stage. The Department of Homeland Security recently unveiled guidance for how to use AI in critical infrastructure, which has become a target for exploitation. Regulatory measures may require organizations to demonstrate robust safeguards against vulnerabilities, including adversarial attacks and data breaches.
However, regulation alone is not enough. Enterprises must also foster a culture of accountability and ethical responsibility. This involves setting internal standards that go beyond compliance, such as prioritizing fairness, reducing bias, and ensuring that AI systems are designed with end-users in mind. Enterprise IT managers hold the keys to striking this balance by implementing transparent practices and fostering trust. By acting thoughtfully now, organizations can harness AI to drive innovation while addressing its inherent risks, ensuring it becomes a cornerstone of progress for years to come.
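As a rough illustration of the human-in-the-loop pattern described above, the sketch below routes AI recommendations either to automatic action or to a human review queue based on confidence and stakes. The threshold, field names, and labels are hypothetical, not drawn from any specific product.

```python
# Minimal sketch of a human-in-the-loop gate: auto-apply only when the model is
# confident AND the decision is low-stakes; everything else goes to a person.
# Thresholds and fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float   # 0.0 - 1.0
    high_stakes: bool   # e.g., a diagnosis suggestion or a large credit decision

def route(pred: Prediction, auto_threshold: float = 0.95) -> str:
    """Decide whether an AI recommendation can be auto-applied or needs review."""
    if pred.high_stakes:
        return "human review"           # clinicians / analysts make the final call
    if pred.confidence >= auto_threshold:
        return "auto-apply"
    return "human review"

# Example: a middling-confidence fraud flag goes to an analyst queue,
# and a high-stakes diagnosis suggestion always does.
print(route(Prediction(label="possible_fraud", confidence=0.81, high_stakes=False)))
print(route(Prediction(label="diagnosis_suggestion", confidence=0.99, high_stakes=True)))
```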
-
How to Avoid Common Hybrid Cloud Pitfallswww.informationweek.comLisa Morgan, Freelance WriterFebruary 10, 20259 Min ReadBrain light via Alamy StockOrganizations continue to fine-tune how they approach data, applications, and infrastructure. The latest move is pulling some data and workloads back to on-premises data centers. The question is whether theyre applying what they learned in the cloud, especially when it comes to private cloud and hybrid implementations.They need to be hybrid by design in terms of intentionally aligning business and technology. It has to be thought through and designed [as] a hybrid cloud architecture, so they dont think, Hey, Im going on prem, so I need to do virtualization here, says Nataraj Nagaratnam, CTO for AI governance and cloud security at IBM. The policies, processes, and organizational construct will enable this transformation, and we see this happening increasingly.What they should be doing is taking the learnings from their move to cloud and having an intentional hybrid strategy.I think AI is an opportunity to get your data right because data feeds the AI. To create value, you need to know where your data is, so governance is important, which in turn means do you have a hybrid landscape in place and a view of your digital assets, data assets and applications? says Nagaratnam.Common PitfallsOne common pitfall that organizations experience is moving entirely to cloud without being intentional about workload placement, Nagaratnam says. Another issue is underestimating the management complexity when theyve built different management control planes and have lost visibility. The third issue is they didnt understand the cloud services shared responsibility model.Related:Nataraj Nagartanam, IBMIts not only infrastructure, like cloud providers, but it is also business applications, software, software providers, SaaS providers, so bringing that together becomes important, says Nagaratnam. AI will shed more [light] on that shared responsibility, because it's no longer infrastructure only. If you think of a model provider, what's the risk? What's the responsibility of the model provider? So that notion of shared responsibility will continue to increase as you deal with data.More fundamentally, the complexity issue has been exacerbated by siloed departmental operations and mergers or acquisitions. Add to that inconsistent policies and significant skills gaps, and its a recipe for disaster.As companies grow their cloud infrastructure, it becomes more complex and presents a significant challenge to keep under control. This leads to unplanned cloud costs, security risks, production downtime, non-compliant cloud assets and misconfigurations in production, says AJ Thompson, chief commercial officer at IT consultancy Northdoor. Losing control means more cloud expenses and more potential downtime. While most companies appear to have mastered their migration to cloud and modernizing their applications, so they are cloud native, many struggle with the operation of cloud and cost containment. This is why we have seen some organizations move workloads back on-premises and why many operate in a hybrid environment.Related:Brian Oates, product manager of cloud VPS & cloud metal at Liquid Web, says the greatest failure in implementing hybrid clouds has to do with on-premises and cloud systems that are not integrated consistently. 
And without clear governance, there is also an inability to handle the data sprawl.Most of the hybrid cloud pitfalls are basically related to poor planning and strategy. Most of the time, organizations use a hybrid solution because of urgent needs, not in the frame of a long-term architectural strategy. Thus, there is a misjudgment in understanding workload compliance, performance and latency requirements of major importance, says Oates. Most of the organizations have taken too lightly the management of hybrid environments by considering that integrating modern cloud with legacy could be seamless.Related:Northdoors Thompson says monitoring hybrid clouds can be challenging because cloud services may not integrate easily with existing on-premises solutions. Interoperability issues and ensuring secure communication and seamless integration within the entire organizations infrastructure can be challenging. And, underestimating network latency can undermine hybrid cloud performance.One of the key reasons that organizations keep some of their workloads on premises is because they must adhere to strict industry standards surrounding the safeguarding of their data. Businesses must understand the implications of these regulations, including the most recent DORA and NIS2 regulations and how they apply to their hybrid cloud environment, says Thompson. This can become even more complicated for global businesses, as different territories often have their own unique requirements. Therefore, organizations must make sure they implement the appropriate governance and policies for their cloud resources.Ferris Ellis, CEO andprincipal at software engineering firm Urban Dynamics, says people make the mistake of taking the network for granted and just assume that it provides reasonably low latency and high bandwidth. This is not the case with hybrid cloud where connectivity is a problem and can cause an SLA failure. There are also the potential cloud egress fees to consider, depending on how the hybrid cloud network design is done.Ferris Ellis, Urban DynamicsNetworks tend to be taken for granted so people dont think about them, says Ellis. Second, it requires a more advanced network design than many IT departments are familiar with. They may be familiar with connecting a bunch of offices using a VPN or SD-WAN solution. But for serious workloads you need to have 10, 40, or even 100+ Gbps of reliable, low latency connectivity between one or more of your locations and multiple cloud regions. There are known ways to do this, but they require familiarity with the internals of the internet that remain hidden to most.Get the Right People InvolvedAside from the product teams whose workloads are moving, the obvious players are the platform and infrastructure teams. There are also a couple of less obvious groups, notably the security, risk, and compliance teams.In my experience, you need these teams to be bought in so they dont become barriers. You must ensure that the conversion doesnt increase risk, and that you are not giving up controls, says Jacob Rosenberg, head of Infrastructure at observability platform provider Chronosphere. In many cases, you can decrease risk, so while getting them bought in may take some work upfront, I think it can be a real win-win.Liquid Webs Oates Utility believes a hybrid cloud strategy needs many stakeholder groups to define and implement.It has to be IT-driven, as this is the team that will have expertise in infrastructure and system integrations, says Oates. 
On the other hand, business leaders take part in being onboard, which will ensure that it fits with organizational priorities. Compliance officers and the legal department should review requirements from a regulatory perspective and discuss the best ways to mitigate risk. Financial managers provide insights on cost implications and budgeting. Cybersecurity experts are also necessary for robust defense and data integrity assurance. Its essential to include the right business stakeholders to ensure the implementation meets the needs of the operation. Next is tapping external consultants or managed service providers who bring in new ideas and specialized expertise.The approach is sure to be comprehensive, yet practical, says Oates.How to Ensure a Smoother TransformationA hybrid-by-design approach is wise because it forces organizations to be mindful of what data and workloads they have and where they should be to meet business objectives. It also requires business and technology leaders to work together.Architecture that factors in the application layer and infrastructure is another critical consideration.Do you have a view of your data and the end state of your IT? Does the landscape accommodate a hybrid cloud architecture [using] a hybrid cloud platform like OpenShift and Kubernetes for applications? Where is your data? How are you consuming data? Is it as a service? What does your data pipeline look like? says IBMs Nagaratnam. Because data is not just going to stay somewhere. [It] has to move. It moves closer to application.Data must also move for AI models, inferencing, and agents, which means thinking about data pipelines in a hybrid context.Hybrid cloud architecture [should] take into account your workload placement and data decisions so that nothing can go to the public cloud or everything needs to stay on prem and whatever decisions there are, but take a risk-based approach, based on data sensitivity, says Nagaratnam. Create a path to continuous compliance and get ahead with AI governance.An ethos of continuous improvement is necessary because it helps ensure agility and more accurate alignment with business requirements.A hybrid cloud strategy should develop and evolve as your business and technology evolve. Base this on a small pilot project to refine the approach to find any challenges early in the process, says Liquid Webs Oates. Second, prioritize security, making extensive use of the zero-trust model and applying policy consistency across all environments. [Make sure to have] a great IT staff or partner who can help manage hybrid environment complexities. Invest in tools that will provide your team with a single source of visibility and automate routine tasks. In this way, enable your team to focus on more strategic work.Collaboration across departments ensures that the strategy fits business and regulatory purposes. Its also important to review workload placement to ensure effective cost control.Unexpected cloud costs [come] down to several factors, including inadequate planning, unforeseen disruptions, underutilized cloud instances, a lack of visibility and/or the need for additional resources. Therefore, a key requirement is to understand hybrid cloud pricing structures, as these can be extremely complex and vary from provider to provider, says Northdoors Thompson. 
Utilizing cloud without knowing what the business needs to pay for can lead to overspending on redundant or underutilized services.
Chronosphere's Rosenberg has observed two approaches to hybrid cloud that tend to have very different outcomes. The first is to make your public cloud look like your on-prem infrastructure, and the second is to make your on-prem infrastructure look as cloud-native as possible, says Rosenberg. The former is often quicker and enables a lift-and-shift migration of workloads, but the second method maximizes the benefits of the cloud environment. For many companies, this means bringing in Kubernetes and refactoring applications to be cloud-native. I find the second method more appealing because not only do you make your deployment, management, and observability of all your applications uniform across both environments, you also get the advantages of cloud architecture combined with retaining the security and compliance benefits of remaining on-prem.
About the Author: Lisa Morgan, Freelance Writer. Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
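One way to operationalize the risk-based workload placement Nagaratnam recommends is a simple placement heuristic applied during architecture reviews. The sketch below is a minimal illustration; the attributes, thresholds, and rules are assumptions, and a real policy would come from your governance, security, and compliance teams.

```python
# Sketch of a risk-based workload-placement check for a hybrid-by-design review.
# Attribute names, thresholds, and example workloads are illustrative only.

def suggest_placement(sensitivity: str, residency_restricted: bool, latency_ms_target: float) -> str:
    """Very rough placement heuristic: sensitivity and residency first, then latency."""
    if residency_restricted or sensitivity == "restricted":
        return "on-prem / private cloud"   # keep regulated data under direct control
    if latency_ms_target < 5:
        return "on-prem, co-located with users and data"
    if sensitivity == "internal":
        return "private or public cloud with encryption and audited access"
    return "public cloud"

workloads = [
    ("payments-ledger", "restricted", True, 20.0),
    ("web-analytics", "public", False, 50.0),
    ("trading-risk-engine", "internal", False, 2.0),
]
for name, sens, residency, latency in workloads:
    print(f"{name}: {suggest_placement(sens, residency, latency)}")
```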
-
The Cost of AI: Power Hunger -- Why the Grid Cant Support AIwww.informationweek.comJoao-Pierre S. Ruth, Senior EditorFebruary 10, 20258 Min ReadTithi Luadthong via Alamy Stock PhotoRemember when plans to use geothermal energy from volcanoes to power bitcoin mining turned heads as examples of skyrocketing, tech-driven power consumption?If it possessed feelings, AI would probably say that was cute as it gazes hungrily at the power grid.InformationWeeks The Cost of AI series previously explored how energy bills might rise with demand from artificial intelligence, but what happens if the grid cannot meet escalating needs?Would regions be forced to ration power with rolling blackouts? Will companies have to wait their turn for access to AI and the power needed to drive it? Will more sources of power go online fast enough to absorb demand?Answers to those questions might not be as simple as adding windmills, solar panels, and more nuclear reactors to the grid. Experts from KX, GlobalFoundries, and Infosys shared some of their perspectives on AIs energy demands and the power grids struggle to accommodate this escalation.I think the most interesting benchmark to talk about is the Stargate [project] that was just announced, says Thomas Barber, vice president, communications infrastructure and data center at GlobalFoundries. The multiyear Stargate effort, announced late January, is a $500 billion plan to build AI infrastructure for OpenAI with data centers in the United States. Youre talking about building upwards of 50 to 100 gigawatts of new IT capacity every year for the next seven to eight years, and thats really just one company.Related:That is in addition to Microsoft and Google developing their own data center buildouts, he says. The scale of that, if you think about it, is the Hoover Dam generates two gigawatts per year. You need 50 new Hoover Dams per year to do it.The Stargate site planned for Abilene, Texas would include power from green energy sources, Barber says. Its wind and solar power in West Texas thats being used to supply power for that.Business Insider reported that developers also filed permits to operate natural gas turbines at Stargate's site in Abilene.Barber says as power gets allocated to data centers, in a broad sense, some efforts to go green are being applied. It depends on whether or not you consider nuclear green, he says. Nuclear is one option, which is not carbon-centric. Theres a lot of work going into colocated data centers in areas where solar is available, where wind is available.Barber says very few exponentials, such as Moores Law on microchips, last, but AI is now on the upslope of the performance curve of these models. Even as AI gets tested against more difficult problems, these are still the early training days in the technologys development.Related:When AI moves from training and more into inference -- where AI draws conclusions -- Barber says demand could be significantly greater, maybe even 10 times so, than with training data. Right now, the slope is driven by training, he says. As these models roll out, as people start adopting them, the demand for inference is going to pick up and the capacity is going to go into serving inference.A Nuclear Scale MatterThe world already sees very hungry AI models, says Neil Kanungo, vice president of product led growth for KX, and that demand is expected to rise. 
According to research released in May by the Electric Power Research Institute (EPRI), data centers currently account for about 4% of electricity use in the United States, and that number is projected to rise as high as 9.1% by 2030.
While AI training drives high power consumption, Kanungo says the ubiquity of AI inference makes its draw on power significant as well. One way to improve efficiency, he says, would be to remove the transmission side of power from the equation by placing data centers closer to power plants. You get huge efficiency gains by cutting inefficiency out, where you're having over 30% losses traditionally in power generation, Kanungo says. He is also a proponent of the use of nuclear power, considering its energy load and land usage impact. The ability to put these data centers near nuclear power plants [means] what you're transmitting out is not power, he says. You're transmitting data out. You're not having losses on data transmission.
Nuclear power development in the United States, he says, has seen some stalling due to negative perspectives on safety and potential environmental concerns. Rising energy demands might be a catalyst to revisit such conversations. This might be the right time to switch those perceptions, Kanungo says, because you have tech giants that are willing to take the risks and handle the waste, and go through the red tape, and make this a profitable endeavor.
He believes these are still the very early stages of AI adoption, and as more agents are used with LLMs -- with agents completing tasks such as shopping for users, filling out tabular data, or deep research -- more computation is needed. We're just at the tip of the iceberg of agents, Kanungo says. The use cases for these transformer-based LLMs are so great, I think the demand for them is going to continue to go up, and therefore we should be investing in power to ensure that you're not jeopardizing residential power, you're not having blackouts, you're not stealing base load.
Energy-Hungry GPUs
There is an unprecedented load being put on the grid, according to Ashiss Kumar Dash, executive vice president and global head - services, utilities, resources, energy and sustainability for Infosys. He says the power conundrum as it relates to AI is three-pronged.
The increase in demand for electricity, the increase in demand for energy, is unprecedented, Dash says. No other general-purpose technology has put this much demand in the past. They say a ChatGPT query consumes 10 times the energy that a Google search would. (According to research published in 2024 by the International Energy Agency, the average electricity demand of a basic Google search without AI is 0.3 watt-hours, while the average electricity demand of a ChatGPT request is 2.9 Wh.)
Dash also cited a CNBC documentary that posited that training an LLM today would effectively emit as much carbon dioxide as five gas-fueled cars over their entire lifetimes. There is this dimension of unprecedented load, he says. There are energy-hungry GPUs, energy-hungry data centers, and the cloud infrastructure that it needs.
The second part of the problem, Dash says, is that data centers tend to be concentrated geographically. If you look at the global data centers, we have [thousands of] data centers in the world, but you can pretty much name where the data centers are, he says.
Seventy percent of the worlds internet traffic goes through Virginia." According to research released in May by the Electric Power Research Institute (EPRI), data centers accounted for 25.6% of Virginia's total electricity consumption.There is some debate over the actual amount of internet traffic funneled through Virginia, with some sources, such as TeleGeography, debunking the 70% scale while Amazon affirmed that figure just last year. Regardless, Virginia is noted by Visual Capitalist for the energy consumption seen tied directly to the concentration of data centers there, as cited by Inside Climate News.That grid must obviously serve residents and local, commercial businesses, he says. When you concentrate the demand like this, its very difficult for the local grid to manage, Dash says. Same thing in Europe -- Ireland. Seventeen or 18% of Irelands electricity demand is on data centers. EPRI estimates that 20% of Ireland's electricity use is attributable to data centers.The third aspect of the problem, he says, is load growth. Utility companies tend to base their grid resiliency models on 2% to 3% maximum growth on a yearly basis, Dash says. Thats how the funding works. Thats how the rate cases are built. But now were talking, in some parts of the US, 20% growth year-on-year. Portland is going to see massive growth. California is seeing the demand.The grid and utility models are not designed to handle such fast growth, he says. For them to invest in the infrastructure and to build up transmission lines and substations and transformers is going to be a big challenge. That does not include recurring spikes in energy load in parts of the country, Dash says. If you have the data centers running at 20% higher energy demand and summer peak hits, the grid is not going to survive -- it's going to go down.However, there is some hope such outages might be avoided. AI companies, energy companies, and multiple partners are building an ecosystem to think about the problem, he says. There was even a discussion at the International Energy Agency Conference in December, he says, on using AI to work on AIs energy needs. It was good to hear tech companies, regulators, energy companies, oil and gas and utilities equally.Dash says he sees encouragement in redesigning and rethinking the grid, for example with the advent of the power usage effectiveness (PUE) metric, which can help drive more efficiency to data centers. I look at the reports and I find that quite a few organizations are able to optimize their power usage to a level where the power used for IT or tech is almost similar to the power used for the entire operations of the company, he says.Initiatives such as the creation of coolants that are more energy efficient, the creation of renewable microgrids close to data centers, and AI modeling to help utilities envision load growth are also encouraging, Dash says. Its AI solving the problem AI created.Read more about:Cost of AIAbout the AuthorJoao-Pierre S. RuthSenior EditorJoao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.See more from Joao-Pierre S. 
Ruth
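The per-query figures cited in this article translate into striking aggregate numbers. The sketch below uses the IEA averages quoted above (0.3 Wh for a basic search, 2.9 Wh for a ChatGPT-style request); the daily query volume and the household consumption figure are illustrative assumptions used only for scale.

```python
# Quick arithmetic behind the "10x a Google search" comparison.
# Per-request figures are the IEA numbers quoted in the article;
# query volume and household usage are hypothetical, for context only.

SEARCH_WH = 0.3               # average basic search, Wh per request (IEA, 2024)
LLM_WH = 2.9                  # average ChatGPT-style request, Wh per request (IEA, 2024)
QUERIES_PER_DAY = 1e9         # assumed daily volume for a large assistant service
HOUSEHOLD_KWH_PER_DAY = 29.0  # rough US household average (assumption)

def daily_mwh(wh_per_query: float, queries: float) -> float:
    """Total daily energy in MWh for a given per-query cost and volume."""
    return wh_per_query * queries / 1e6   # Wh -> MWh

search_mwh = daily_mwh(SEARCH_WH, QUERIES_PER_DAY)
llm_mwh = daily_mwh(LLM_WH, QUERIES_PER_DAY)

print(f"Search-style load: {search_mwh:,.0f} MWh/day")
print(f"LLM-style load:    {llm_mwh:,.0f} MWh/day ({llm_mwh / search_mwh:.1f}x)")
print(f"Extra load in household-equivalents: "
      f"{(llm_mwh - search_mwh) * 1000 / HOUSEHOLD_KWH_PER_DAY:,.0f}")
```

Under these assumptions, swapping a billion daily searches for LLM requests adds on the order of 2,600 MWh per day, roughly the daily consumption of about 90,000 households, which is the kind of step change utilities' 2% to 3% growth models were never built for.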
-
How Will International Politics Complicate US Access to AI?www.informationweek.com
Are trade wars, the hunger for chipmaking materials from overseas, and the emergence of DeepSeek shaking up AI availability?
Joao-Pierre S. Ruth, Senior Editor, February 10, 2025
Sometimes "The Cost of AI" rests in the hands of political players. International politics can throw disruptive curves into companies' plans and ambitions to leverage AI to remain competitive. The extent of such disruptions -- or the negotiations to avoid them -- could vary in influence based on how organizations respond.
Attempts by the United States to limit China's access to chips produced in Asia that support AI made the arrival of DeepSeek, a seemingly lower-cost alternative to OpenAI, feel like a gamechanger. It rattled some market assumptions about pricier hardware and pointed to the potential to use alternative sources of technology to drive AI plans forward.
Could global needs for AI create strange bedfellows comparable to agreements seen in the pursuit of fossil fuels? Does a path forward exist for companies stymied by politics that risk narrowing access to international resources for AI technology? Ian Cohen, CEO of Lokker; Ted Krantz, CEO of Interos; Sahil Agarwal, co-founder and CEO of Enkrypt AI; and David Brauchler, technical director and head of AI and ML security for NCC Group, discussed those and other questions in this episode of DOS Won't Hunt.
Among the questions discussed: Has DeepSeek changed the game in terms of materials and AI needs, or does DeepSeek still need to be proven out before the rules of the game are rewritten? Is there any sense of communication between public and private sectors to try to mitigate potential issues with international access to materials and technology for AI? Does everyone need top-tier chips and materials to support their AI efforts? Are there AI needs and functions that are not beholden to access to the harder-to-obtain chips and hardware?
Listen to the full podcast here.
About the Author: Joao-Pierre S. Ruth, Senior Editor. Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.
-
How to Navigate Data Governance Implementationwww.informationweek.comTyler Ditto, Principal Customer Success Manager, ImmutaFebruary 10, 20254 Min Read Panther Media GmbH via Alamy StockEstablishing a data governance framework is essential for any organization, but as technology evolves, so does the process of implementing an effective, future-proof data governance program.Whether youre just beginning to explore data governance or are looking to refine your existing framework for future scalability, heres what you can expect to face throughout your journey and how to keep all stakeholders engaged and productive.Proof of ConceptThe first step is proof of concept (POC). Before signing a contract with a data governance platform provider, you must outline the features and functionality that are must-haves to support your business value and initiatives. This may include the need for a solution that will connect with your cloud storage and compute platforms, or a tool that requires SQL expertise to work with your long-term growth goals.Clearly define the POCs scope and success criteria with decision-making stakeholders to help simplify the evaluation and transition to the next steps. The most successful organizations document their goals, understand who the stakeholders are, and define the long-term business problems they want to solve. This documentation approach helps hold everyone accountable for what needs to be achieved and what the goal line is in order to move forwardRelated:Technical Implementation/DeploymentTechnical implementation/deployment is the next phase, which focuses on designing the architecture to fit your new tool into your data ecosystem, including integrating it with the existing technologies in your tech stack.To implement your data governance solution, its key to have a strong vision of where you want to go. At this stage, its easy to want to pursue a fully integrated, end-to-end platform, but you should instead focus on integrating the technologies that are absolutely necessary to execute a minimum viable product (MVP).Sitting down with decision-makers and stakeholders, as well as the support team, helps to align the vision, roadmap and strategy for rolling out the platform. This ensures that the teams agree on how to successfully deploy a product in terms of instance sizing, integration selection and design, resource planning, and workstream prioritization.Minimum Viable ProductThe MVP phase is the pivotal point in the customer life cycle. This is where you begin to design and validate the people and processes involved in fully deploying and driving the adoption of your data governance platform. This phase determines whether an organization has a strong strategy, is prepared to scale across business units and divisions, and can navigate obstacles that may arise in the following phases.Related:The MVP will highlight any gaps in preparedness or commitment to being able to see your project through to its desired end state. Your teams, as well as the platform support team, must share a strong understanding of how data owners, stewards, governors, engineers and users will interact with data and with each other.At large enterprises, this is often the point at which the data marketplace, mesh or fabric concept is woven into the solution to ensure it is fully integrated into the organizations data strategy. 
Completing this phase means leveraging the newly established processes to go live with data users in production and then evaluating and optimizing deployment over the coming phases of adoption.
Strategy and Adoption
Next, we move to the strategy and adoption phase. This is when you work with your governance platform support team to review your MVP in retrospect, refine processes, close any gaps in your adoption strategy, and then put it all to work. A best practice at this stage is to develop a Center of Excellence (CoE) that creates support lanes for various stakeholder groups and thus avoids bottlenecks as you scale. Your platform support team should help with this by identifying CoE leaders, training processes, and communication strategies for each business unit to build a strong foundation for successful adoption.
When your focus shifts to driving adoption, you'll naturally start to think more long-term. This includes diligent demand planning to understand which users will adopt the product, when they will do so, and whether that aligns with your product licensing. By focusing your stakeholders' and executives' attention on demand planning, you'll ensure that you have a clear plan for how your platform will be used in the near term and a predictive plan for how the rest of your organization will adopt and begin to realize ROI.
Starting Your Data Governance Implementation
The length of a data governance implementation depends on your organization. The more cohesive your team is with your new platform's support team, the better you'll be able to plan effectively and move through these phases efficiently. Work with your support team to schedule quarterly planning workshops, business reviews and program incremental planning exercises. This will set you up for long-term success by ensuring your team is aligned in understanding the roadmap and execution strategy. Then you can put your data to work and start seeing results faster and more confidently.
About the Author: Tyler Ditto, Principal Customer Success Manager, Immuta. Tyler Ditto is a seasoned consultant with extensive experience in implementing data governance programs and technologies. Over the past four years with Immuta, Tyler has successfully led several projects, helping organizations establish robust frameworks for managing, securing, and leveraging their data. With a passion for bridging the gap between business and technology, Tyler specializes in delivering practical, scalable solutions that drive organizational efficiency and compliance.
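The demand-planning step described in the adoption phase above can be as simple as projecting onboarding against licensed seats. The sketch below is purely illustrative; the seat count and quarterly rollout numbers are hypothetical.

```python
# Simple demand-planning sketch: compare the planned user rollout with the seats
# covered by the current platform license. All numbers are illustrative.

LICENSED_SEATS = 500
onboarding_plan = {      # hypothetical rollout by quarter
    "Q1": 120,           # finance data stewards and analysts
    "Q2": 180,           # marketing and operations
    "Q3": 150,           # engineering and data science
    "Q4": 200,           # remaining business units
}

total = 0
for quarter, new_users in onboarding_plan.items():
    total += new_users
    status = "OK" if total <= LICENSED_SEATS else "over license -- renegotiate or stagger rollout"
    print(f"{quarter}: +{new_users} users, cumulative {total}/{LICENSED_SEATS} -> {status}")
```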