WWW.FORBES.COM | Customers Are Giving Brands The Silent Treatment. Here's How To Win Them Back
Increasingly, consumers are choosing not to share their feedback with brands directly, regardless of whether they've had a good or bad experience.
-
WWW.TECHSPOT.COM | Biological computing offers path to drastically reduced energy consumption for digital processing

TL;DR: Research in both biocomputing and neuromorphic computing may hold the key to better computer energy efficiency. By drawing inspiration from nature's own efficient systems, such as the human brain, we may be able to address the growing energy demands of our increasingly digital world.

As computers consume more and more electricity, scientists are turning to an unlikely inspiration for greater sustainability: the humble biological cell. This approach, known as biological computing, could slash energy consumption in computational processes. A recent article in The Conversation highlighted this concept, which draws on nature's own efficient systems to tackle one of the most pressing challenges in modern computing. As data centers and household devices gobble up roughly 3% of global electricity demand, with artificial intelligence poised to push that figure even higher, the need for energy-efficient alternatives has never been more urgent.

The concept of biological computing is rooted in a principle introduced by IBM scientist Rolf Landauer in 1961. The Landauer limit states that a single computational task, such as setting a bit to zero or one, requires a minimum energy expenditure of about 10⁻²¹ joules (J). While this amount seems negligible, it becomes substantial when considering the billions of operations computers perform.

Operating computers at the Landauer limit would theoretically make electricity consumption for computation and heat management inconsequential. However, there's a significant catch: to achieve this level of efficiency, operations would need to be performed infinitely slowly. In practice, faster computations inevitably lead to increased energy use. Current processors operate at clock speeds of billions of cycles per second, using about 10⁻¹¹ J per bit, approximately ten billion times more than the Landauer limit.
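The orders of magnitude here can be checked with a few lines of arithmetic. This is a back-of-the-envelope sketch: it assumes room temperature (300 K) for the Landauer limit and takes ~10⁻¹¹ J/bit as the conventional-processor figure, corresponding to the "ten billion times" gap described above.

```python
import math

# Landauer limit: minimum energy to erase one bit, k_B * T * ln(2)
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K
landauer_j_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J

# Approximate per-bit energy of a conventional processor
conventional_j_per_bit = 1e-11

ratio = conventional_j_per_bit / landauer_j_per_bit
print(f"Landauer limit:   {landauer_j_per_bit:.2e} J/bit")
print(f"Conventional CPU: {conventional_j_per_bit:.0e} J/bit")
print(f"Gap: ~{ratio:.1e}x")   # a few billion: the "ten billion" order of magnitude
```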
This high-speed operation is a result of computers working serially, executing one operation at a time. To address this energy dilemma, researchers are exploring a fundamentally different computer design based on massively parallel processing. Instead of relying on a single high-speed "hare" processor, this approach proposes using billions of slower "tortoise" processors, each taking a full second to complete its task. This could theoretically allow computers to operate near the Landauer limit, using orders of magnitude less energy than current systems.

One promising implementation of this idea is network-based biocomputation, which harnesses the power of biological motor proteins, nature's own nanoscale machines. This system involves encoding computational tasks into nanofabricated mazes of channels, typically made of polymer patterns deposited on silicon wafers. Biofilaments, powered by motor proteins, explore all possible paths through the maze simultaneously. Each biofilament is just a few nanometres in diameter and about a micrometre long, acting as an individual "computer" by encoding information through its spatial position in the maze. This architecture is particularly suitable for solving combinatorial problems, which are computationally demanding for serial computers.

Experiments have shown that such biocomputers require between 1,000 and 10,000 times less energy per computation than electronic processors. This efficiency stems from the evolved nature of biological motor proteins, which use only the energy necessary to perform their tasks at the required rate: typically a few hundred steps per second, a million times slower than transistors.

Significant progress has been made in this field recently. Heiner Linke, Professor of Nanophysics at Lund University and author of the article in The Conversation, also co-authored a 2023 paper that demonstrated the possibility of operating a computer near the Landauer limit.
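The parallel-exploration idea can be illustrated with a toy model. Suppose the maze encodes an instance of SUBSET SUM: each junction row corresponds to one element, and a filament either keeps its lane or is shifted sideways by that element's value, so its exit position equals the sum of the elements it "took." This sketch (an illustration only, not the actual nanofabricated device) enumerates the paths serially; in the physical device, the ensemble of filaments covers all of them in parallel.

```python
from itertools import product

def maze_exit_positions(elements):
    """Toy model of a network-based biocomputer for SUBSET SUM.

    Each path through the maze makes one binary choice per element
    (take it or skip it); a filament's exit position is the sum of
    the elements it took. We enumerate the 2^n paths serially here;
    the device would explore them simultaneously, one filament each.
    """
    exits = set()
    for choices in product((0, 1), repeat=len(elements)):
        exits.add(sum(c * e for c, e in zip(choices, elements)))
    return exits

# Exit positions for the instance {2, 5, 9}: every achievable subset sum
print(sorted(maze_exit_positions([2, 5, 9])))
# -> [0, 2, 5, 7, 9, 11, 14, 16]
```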
This breakthrough brings us closer to realizing the potential of ultra-low-energy computing. While the concept of biocomputation is promising, challenges remain in scaling up these systems to compete with electronic computers in terms of speed and computational power. Researchers must overcome obstacles such as precisely controlling biofilaments, reducing error rates, and integrating these systems with current technology. If these hurdles can be surmounted, the resulting processors could solve certain types of challenging computational problems at a drastically reduced energy cost, with far-reaching implications for the future of computing and its environmental impact.

As an alternative approach, researchers are also exploring neuromorphic computing, which attempts to emulate the highly interconnected architecture of the human brain. While the basic physical elements of the brain may not be inherently more energy-efficient than transistors, its unique structure and operation offer intriguing possibilities for energy-efficient computing.
-
WWW.TECHSPOT.COM | Netflix is suing Broadcom's VMware over virtual machine patents

What just happened? Netflix is suing Broadcom, alleging infringement of multiple patents related to virtual machine operations. The video streaming giant alleges that VMware products like vSphere and their cloud solutions violate up to five Netflix patents related to managing and optimizing virtual machines.

The patents in question cover some critical behind-the-scenes tech that helps keep virtual machines running smoothly, according to the lawsuit filed in a California federal court. Three of the patents deal with tracking and allocating CPU resources to virtual machines efficiently. The other two describe methods for a load balancer to seamlessly start up virtual machines on physical servers as needed.

Netflix says VMware's virtualization tech flat-out uses these patented innovations without permission. It claims "VMware has infringed and continues to infringe" on these patents through products like vSphere Foundation, VMware Cloud Foundation, and their cloud offerings for AWS, Azure, Google Cloud, and more. Netflix also asserts that VMware knew it was potentially infringing as far back as 2012, when some of these patents came up during one of the company's own patent applications. Netflix says the infringement has been "willful and deliberate" after VMware gained this knowledge.

Reuters reports that Netflix now wants VMware's new owner Broadcom, which bought the company last year for a massive $69 billion, to pay monetary damages.

It's worth mentioning that this patent brawl actually has roots going back to 2018, when Broadcom first sued Netflix, claiming it infringed on Broadcom patents for video streaming technology. That legal fight spans multiple countries, including the US, Germany, and the Netherlands. The US lawsuit is slated for trial next June.
Previous reports suggested that Broadcom's 2018 lawsuit came as a result of Netflix's meteoric growth during the Covid-19 pandemic, when viewers flocked to streaming services. This boom came at Broadcom's expense, with dwindling sales of its TV set-top box chips as cable subscriptions declined. Broadcom also has a history with patent infringement suits: in 2017, it sued LG, Vizio, and other smart TV manufacturers, as well as rival MediaTek, for patent violations.

VMware's software powers huge swaths of enterprise data centers and clouds, so Netflix's offensive could have major ramifications if its patent claims prevail. Neither side has commented so far.
-
WWW.DIGITALTRENDS.COM | Like Nosferatu? Then watch these 3 movies right now

Table of Contents: Nosferatu the Vampyre (1979) · Bram Stoker's Dracula (1992) · Shadow of the Vampire (2000)

This Christmas, the multiplex will be invaded by something other than Wicked's airborne witches and speedy hedgehogs. A vampire is coming down the chimney, and he promises to scare the pants off you. Robert Eggers' reimagining of Nosferatu has already accumulated raves from critics and is one of the most anticipated movies of the holiday season. If you liked the atmospheric horror film starring The Order's Nicholas Hoult, Lily-Rose Depp, and Bill Skarsgård, then you're reading the right article. The following is a brief list of worthy movies you should watch if you're eager to see more bloodsucking this year or the next.

Nosferatu the Vampyre (1979)
(Image credit: 20th Century Studios)

If you like a remake, then it's only natural to be curious about the original, right? When I first saw John Carpenter's masterful take on The Thing, I immediately sought out the Howard Hawks-produced 1951 original just to see how it compared with the version I had just watched. And while F.W. Murnau's 1922 silent film is undeniably a classic, it's also a creaky one. It's dated, to put it mildly, and besides, I've always preferred Werner Herzog's haunting 1979 remake, Nosferatu, which is creepier and better. It's one of the few horror movies that makes you actually feel the dread of death.

The story is pretty much the same: A young man travels to Transylvania to see a reclusive client and finds himself in the thrall of Count Dracula. He escapes, Van Helsing shows up, and his wife Lucy is hunted by the vampire. Yet Herzog throws a few surprises into his narrative, including an ending that puts a downbeat spin on Stoker's more positive conclusion, and he emphasizes atmosphere above all else. This is a movie where you can feel the decay of all the bodies onscreen.
Oh, and if you're squeamish about rats, it's best to avoid this one altogether. The great French actress Isabelle Adjani is Lucy, and the psychotic German actor Klaus Kinski is the Count. Both were born to play these roles and give the picture a timeless, mythic quality that's downright lyrical. The brooding score is by Popol Vuh, and it evokes images of bare tree branches, gray skies populated by black crows, and empty tables covered in vermin. It's glorious. Nosferatu the Vampyre is streaming on Tubi.

Bram Stoker's Dracula (1992)

Robert Eggers' re-imagining of the classic vampire tale owes its biggest debt not to its namesake but rather to Bram Stoker's Dracula. Both movies emphasize mood over outright horror, and both have bold, ambitious directors behind the camera. Megalopolis auteur Francis Ford Coppola is the filmmaker who made this one, and it has the excessiveness and boldness that powered his earlier classics like Apocalypse Now and even One From the Heart.

Gary Oldman is the Count in this one, and here, he's presented as a tragic anti-hero who just wants a little love in his life. Well, one in particular: Mina Murray, and since she's played by Winona Ryder, can you blame him? Coppola serves up gothic romance with all the trimmings: star-crossed lovers separated by oceans of time; a heroine with skin so pale you can literally see her heart beating through her chest at one point; and a love triangle so lopsided that it might as well be a circle.

(Image credit: 20th Century Fox)

The actors are all fine, but Dracula's real star is the production itself. The Oscar-winning costumes by Eiko Ishioka are genuinely weird and beautiful, while the set design and VFX all emphasize practicality over computer-generated nonsense. The score by Polish composer Wojciech Kilar recalls European decadence and the death of an old, dark world blighted by the light of progress and technology.
By the end, there's a sense of something passing, and it's not just the Count getting his head chopped off. Bram Stoker's Dracula is streaming on Tubi.

Shadow of the Vampire (2000)

Who was Max Schreck? The German actor, who first embodied the titular Nosferatu in F.W. Murnau's film, didn't have much of a career before or after it, and he died in relative obscurity in 1936. The 2000 film Shadow of the Vampire depicts the making of Murnau's classic horror movie and puts forth this interesting theory: What if Max really was a vampire, and the director and his cast and crew didn't know it until midway through the production?

It's an intriguing idea, if an absurd one, but director E. Elias Merhige milks it for all it's worth in his darkly comic tale of a movie shoot that is constantly threatening to go off the rails. John Malkovich stars as a Murnau who is willing to do anything, even murder, to get the perfect shot, while Willem Dafoe gives an Oscar-nominated performance as Schreck, who brings new meaning to the term "method acting." It's a deeply unsettling performance, but then, what else would you expect? Shadow of the Vampire can be rented or purchased on Amazon Prime Video.
-
WWW.WSJ.COM | What CIOs Read in 2024
This year's picks leaned heavily into timeless and often tech-immune topics, from leadership and teamwork to the art and science of doing less.
-
WWW.WSJ.COM | Sekisui Chemical to Mass-Produce Solar Films in $2 Billion Project
The Japanese chemical company announced Thursday that it will establish a subsidiary in January to produce perovskite solar cells, which are light and flexible.
-
ARSTECHNICA.COM | The year in review: 2024, the year AI drove everyone crazy
What do eating rocks, rat genitals, and Willy Wonka have in common? AI, of course.
Benj Edwards | Dec 26, 2024 7:00 am
Credit: GeorgePeters / imaginima via Getty Images

It's been a wild year in tech thanks to the intersection between humans and artificial intelligence. 2024 brought a parade of AI oddities, mishaps, and wacky moments that inspired odd behavior from both machines and man. From AI-generated rat genitals to search engines telling people to eat rocks, this year proved that AI has been having a weird impact on the world.

Why the weirdness? If we had to guess, it may be due to the novelty of it all. Generative AI and applications built upon Transformer-based AI models are still so new that people are throwing everything at the wall to see what sticks. People have been struggling to grasp both the implications and potential applications of the new technology. Riding along with the hype, different types of AI that may end up being ill-advised, such as automated military targeting systems, have also been introduced.

It's worth mentioning that, aside from the crazy news, we saw a few notable AI advances in 2024 as well. For example, Claude 3.5 Sonnet, launched in June, held off the competition as a top model for most of the year, while OpenAI's o1 used runtime compute to expand GPT-4o's capabilities with simulated reasoning.
Advanced Voice Mode and NotebookLM also emerged as novel applications of AI tech, and the year saw the rise of more capable music synthesis models and also better AI video generators, including several from China. But for now, let's get down to the weirdness.

ChatGPT goes insane
Credit: Benj Edwards / Getty Images

Early in the year, things got off to an exciting start when OpenAI's ChatGPT experienced a significant technical malfunction that caused the AI model to generate increasingly incoherent responses, prompting users on Reddit to describe the system as "having a stroke" or "going insane." During the glitch, ChatGPT's responses would begin normally but then deteriorate into nonsensical text, sometimes mimicking Shakespearean language. OpenAI later revealed that a bug in how the model processed language caused it to select the wrong words during text generation, leading to nonsense outputs (basically the text version of what we at Ars now call "jabberwockies"). The company fixed the issue within 24 hours, but the incident led to frustrations about the black-box nature of commercial AI systems and users' tendency to anthropomorphize AI behavior when it malfunctions.

The great Wonka incident
A photo of "Willy's Chocolate Experience" (inset), which did not match AI-generated promises, shown in the background. Credit: Stuart Sinclair

The collision between AI-generated imagery and consumer expectations fueled human frustrations in February when Scottish families discovered that "Willy's Chocolate Experience," an unlicensed Wonka-ripoff event promoted using AI-generated wonderland images, turned out to be little more than a sparse warehouse with a few modest decorations. Parents who paid £35 per ticket encountered a situation so dire they called the police, with children reportedly crying at the sight of a person in what attendees described as a "terrifying outfit."
The event, created by House of Illuminati in Glasgow, promised fantastical spaces like an "Enchanted Garden" and "Twilight Tunnel" but delivered an underwhelming experience that forced organizers to shut down midway through its first day and issue refunds. While the show was a bust, it brought us an iconic new meme for job disillusionment in the form of a photo: the green-haired Willy's Chocolate Experience employee who looked like she'd rather be anywhere else on earth at that moment.

Mutant rat genitals expose peer review flaws
An actual laboratory rat, who is intrigued. Credit: Getty | Photothek

In February, Ars Technica senior health reporter Beth Mole covered a peer-reviewed paper published in Frontiers in Cell and Developmental Biology that created an uproar in the scientific community when researchers discovered it contained nonsensical AI-generated images, including an anatomically incorrect rat with oversized genitals. The paper, authored by scientists at Xi'an Honghui Hospital in China, openly acknowledged using Midjourney to create figures that contained gibberish text labels like "Stemm cells" and "iollotte sserotgomar." The publisher, Frontiers, posted an expression of concern about the article, titled "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway," and launched an investigation into how the obviously flawed imagery passed through peer review. Scientists across social media platforms expressed dismay at the incident, which mirrored concerns about AI-generated content infiltrating academic publishing.

Chatbot makes erroneous refund promises for Air Canada
Credit: Alvin Man | iStock Editorial / Getty Images Plus

If, say, ChatGPT gives you the wrong name for one of the seven dwarves, it's not such a big deal. But in February, Ars senior policy reporter Ashley Belanger covered a case of costly AI confabulation in the wild.
In the course of online text conversations, Air Canada's customer service chatbot told customers inaccurate refund policy information. The airline faced legal consequences later when a tribunal ruled the airline must honor commitments made by the automated system. Tribunal adjudicator Christopher Rivers determined that Air Canada bore responsibility for all information on its website, regardless of whether it came from a static page or an AI interface. The case set a precedent for how companies deploying AI customer service tools could face legal obligations for automated systems' responses, particularly when they fail to warn users about potential inaccuracies. Ironically, the airline had reportedly spent more on the initial AI implementation than it would have cost to maintain human workers for simple queries, according to Air Canada executive Steve Crocker.

Will Smith lampoons his digital double
The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. Credit: Will Smith / Getty Images / Benj Edwards

In March 2023, a terrible AI-generated video of Will Smith's AI doppelganger eating spaghetti began making the rounds online. The AI-generated version of the actor gobbled down the noodles in an unnatural and disturbing way. Almost a year later, in February 2024, Will Smith himself posted a parody response video to the viral jabberwocky on Instagram, featuring deliberately exaggerated, AI-like pasta consumption, complete with hair-nibbling and finger-slurping antics. Given the rapid evolution of AI video technology, particularly since OpenAI had just unveiled its Sora video model four days earlier, Smith's post sparked discussion in his Instagram comments, where some viewers initially struggled to distinguish between the genuine footage and AI generation.
It was an early sign of "deep doubt" in action as the tech increasingly blurs the line between synthetic and authentic video content.

Robot dogs learn to hunt people with AI-guided rifles
A still image of a robotic quadruped armed with a remote weapons system, captured from a video provided by Onyx Industries. Credit: Onyx Industries

At some point in recent history (somewhere around 2022), someone took a look at robotic quadrupeds and thought it would be a great idea to attach guns to them. A few years later, the US Marine Forces Special Operations Command (MARSOC) began evaluating armed robotic quadrupeds developed by Ghost Robotics. The robot "dogs" integrated Onyx Industries' SENTRY remote weapon systems, which featured AI-enabled targeting that could detect and track people, drones, and vehicles, though the systems require human operators to authorize any weapons discharge. The military's interest in armed robotic dogs followed a broader trend of weaponized quadrupeds entering public awareness. This included viral videos of consumer robots carrying firearms and, later, commercial sales of flame-throwing models. While MARSOC emphasized that weapons were just one potential use case under review, experts noted that the increasing integration of AI into military robotics raised questions about how long humans would remain in control of lethal force decisions.

Microsoft Windows AI is watching
A screenshot of Microsoft's new "Recall" feature in action. Credit: Microsoft

In an era where many people already feel like they have no privacy due to tech encroachments, Microsoft dialed it up to an extreme degree in May. That's when Microsoft unveiled a controversial Windows 11 feature called "Recall" that continuously captures screenshots of users' PC activities every few seconds for later AI-powered search and retrieval.
The feature, designed for new Copilot+ PCs using Qualcomm's Snapdragon X Elite chips, promised to help users find past activities, including app usage, meeting content, and web browsing history. While Microsoft emphasized that Recall would store encrypted snapshots locally and allow users to exclude specific apps or websites, the announcement raised immediate privacy concerns, as Ars senior technology reporter Andrew Cunningham covered. It also came with a technical toll, requiring significant hardware resources, including 256GB of storage space, with 25GB dedicated to storing approximately three months of user activity. After Microsoft pulled the initial test version due to public backlash, Recall later entered public preview in November with reportedly enhanced security measures. But secure spyware is still spyware: Recall, when enabled, still watches nearly everything you do on your computer and keeps a record of it.

Google Search told people to eat rocks
This is fine. Credit: Getty Images

In May, Ars senior gaming reporter Kyle Orland (who assisted commendably with the AI beat throughout the year) covered Google's newly launched AI Overview feature. It faced immediate criticism when users discovered that it frequently provided false and potentially dangerous information in its search result summaries. Among its most alarming responses, the system advised that humans could safely consume rocks, incorrectly citing scientific sources about the geological diet of marine organisms. The system's other errors included recommending nonexistent car maintenance products, suggesting unsafe food preparation techniques, and confusing historical figures who shared names. The problems stemmed from several issues, including the AI treating joke posts as factual sources and misinterpreting context from original web content. But most of all, the system relies on web results as indicators of authority, which we called a flawed design.
While Google defended the system, stating these errors occurred mainly with uncommon queries, a company spokesperson acknowledged they would use these "isolated examples" to refine their systems. But to this day, AI Overview still makes frequent mistakes.

Stable Diffusion generates body horror
An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass. Credit: HorneyMetalBeing

In June, Stability AI's release of the image synthesis model Stable Diffusion 3 Medium drew criticism online for its poor handling of human anatomy in AI-generated images. Users across social media platforms shared examples of the model producing what we now like to call jabberwockies: AI generation failures with distorted bodies, misshapen hands, and surreal anatomical errors. Many in the AI image-generation community viewed it as a significant step backward from previous image-synthesis capabilities. Reddit users attributed these failures to Stability AI's aggressive filtering of adult content from the training data, which apparently impaired the model's ability to accurately render human figures. The troubled release coincided with broader organizational challenges at Stability AI, including the March departure of CEO Emad Mostaque, multiple staff layoffs, and the exit of three key engineers who had helped develop the technology. Some of those engineers founded Black Forest Labs in August and released Flux, which has become the latest open-weights AI image model to beat.

ChatGPT Advanced Voice imitates human voice in testing
Credit: Ole_CNX via Getty Images

AI voice-synthesis models are master imitators these days, and they are capable of much more than many people realize. In August, we covered a story where OpenAI's ChatGPT Advanced Voice Mode feature unexpectedly imitated a user's voice during the company's internal testing, revealed by OpenAI after the fact in safety testing documentation.
To prevent future instances of an AI assistant suddenly speaking in your own voice (which, let's be honest, would probably freak people out), the company created an output classifier system to prevent unauthorized voice imitation. OpenAI says that Advanced Voice Mode now catches all meaningful deviations from approved system voices. Independent AI researcher Simon Willison discussed the implications with Ars Technica, noting that while OpenAI restricted its model's full voice synthesis capabilities, similar technology would likely emerge from other sources within the year. Meanwhile, the rapid advancement of AI voice replication has caused general concern about its potential misuse, although companies like ElevenLabs have already been offering voice cloning services for some time.

San Francisco's robotic car horn symphony
A Waymo self-driving car in front of Google's San Francisco headquarters, San Francisco, California, June 7, 2024. Credit: Getty Images

In August, San Francisco residents got a noisy taste of robo-dystopia when Waymo's self-driving cars began creating an unexpected nightly disturbance in the South of Market district. In a parking lot off 2nd Street, the cars congregated autonomously every night during rider lulls at 4 am and began engaging in extended honking matches at each other while attempting to park. Local resident Christopher Cherry's initial optimism about the robotic fleet's presence dissolved as the mechanical chorus grew louder each night, affecting residents in nearby high-rises. The nocturnal tech disruption served as a lesson in the unintentional effects of autonomous systems when run in aggregate.

Larry Ellison dreams of all-seeing AI cameras
Credit: Benj Edwards / Mike Kemp via Getty Images

In September, Oracle co-founder Larry Ellison painted a bleak vision of ubiquitous AI surveillance during a company financial meeting.
The 80-year-old database billionaire described a future where AI would monitor citizens through networks of cameras and drones, asserting that the oversight would ensure lawful behavior from both police and the public. His surveillance predictions reminded us of parallels to existing systems in China, where authorities already used AI to sort surveillance data on citizens as part of the country's "sharp eyes" campaign from 2015 to 2020. Ellison's statement reflected the sort of worst-case tech-surveillance-state scenario (likely antithetical to any sort of free society) that dozens of sci-fi novels of the 20th century warned us about.

A dead father sends new letters home
An AI-generated image featuring my late father's handwriting. Credit: Benj Edwards / Flux

AI has made many of us do weird things in 2024, including this writer. In October, I used an AI synthesis model called Flux to reproduce my late father's handwriting with striking accuracy. After scanning 30 samples from his engineering notebooks, I trained the model using computing time that cost less than five dollars. The resulting text captured his distinctive uppercase style, which he developed during his career as an electronics engineer. I enjoyed creating images showing his handwriting in various contexts, from folder labels to skywriting, and made the trained model freely available online for others to use. While I approached it as a tribute to my father (who would have appreciated the technical achievement), many people found the whole experience weird and somewhat disturbing. The things we unhinged Bing Chat-like journalists do to bring awareness to a topic are sometimes unconventional. So I guess it counts for this list!

For 2025? Expect even more AI

Thanks for reading Ars Technica this past year and following along with our team coverage of this rapidly emerging and expanding field. We appreciate your kind words of support.
Ars Technica's 2024 AI words of the year were: vibemarking, deep doubt, and the aforementioned jabberwocky. The old stalwart "confabulation" also made several notable appearances. Tune in again next year when we continue to try to figure out how to concisely describe novel scenarios in emerging technology by labeling them.

Looking back, our prediction for 2024 in AI last year was "buckle up." It seems fitting, given the weirdness detailed above, especially the part about the robot dogs with guns. For 2025, AI will likely inspire more chaos ahead, but also potentially get put to serious work as a productivity tool, so this time, our prediction is "buckle down."

Finally, we'd like to ask: What was the craziest story about AI in 2024 from your perspective? Whether you love AI or hate it, feel free to suggest your own additions to our list in the comments. Happy New Year!

Benj Edwards, Senior AI Reporter. Benj Edwards is Ars Technica's Senior AI Reporter and founded the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
-
WWW.INFORMATIONWEEK.COM | Best Practices for Managing Hybrid Cloud Data Governance
Nathan Eddy, Freelance Writer | December 26, 2024 | 4 Min Read | Andrey Piza via Alamy Stock

The acceleration of hybrid-cloud adoption means organizations must refine data governance strategies to address growing complexity and ensure seamless operations across environments. A unified approach to monitoring and containerization will play a critical role in enhancing data portability and maintaining consistency across diverse cloud ecosystems. Data governance will increasingly rely on emerging technologies to manage and secure data effectively.

A hybrid organizational approach can effectively balance centralized and decentralized data governance. Nick Elsberry, leader of software technology consulting at Xebia, recommends establishing a central data governance team to lead the program. "This team should gather requirements from decentralized teams, set policies and guidelines, purchase and provide data management tools, and educate and coach decentralized teams," he says. The central team must have a strong mandate and be backed by senior management. Elsberry says regular exposure to senior management and a standing bi-monthly data governance board meeting, where the central team sets the agenda, is also important.

AI Tools Come Online

Meanwhile, AI-driven tools are also set to transform data governance capabilities by automating routine processes and enhancing decision-making. Ari Weil, cloud evangelist for Akamai, says the biggest impact AI tools are having on data governance comes from their ability to automate processes and to dynamically ensure compliance with regulations. "They can quickly scan and categorize data to identify what's subject to specific laws, such as GDPR or HIPAA, and identify where to apply or enforce policies accordingly," he says.
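As a rough illustration of the scan-and-categorize step described here, the sketch below tags records with the policy regime their contents may fall under. This is a toy: production systems use trained models and far richer detectors, and the patterns and policy labels are hypothetical, not any vendor's actual rules.

```python
import re

# Hypothetical pattern -> policy-label rules, for illustration only;
# real classifiers are trained models, not three regexes.
RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "email address: GDPR personal data"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "US SSN: HIPAA/PII"),
    (re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"), "card number: PCI DSS"),
]

def classify(record: str) -> list[str]:
    """Return the policy tags triggered by this record's contents."""
    return [tag for pattern, tag in RULES if pattern.search(record)]

print(classify("contact: jane.doe@example.com, SSN 123-45-6789"))
# -> ['email address: GDPR personal data', 'US SSN: HIPAA/PII']
```

Once records carry tags like these, downstream tooling can route them to the appropriate retention, masking, or residency policy automatically.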
This not only speeds up compliance but also reduces human error. One of the challenges, however, is integrating these tools with existing systems, especially if the data isn't well organized or tagged, or if storage solutions combine data from multiple regions, making it much more difficult for AI-powered tools to accurately identify and manage data.

Weaving Data Fabric

Weil notes the rise of data fabric solutions, which unify data management across disparate sources, will enable organizations to maintain visibility and control over their data regardless of its location. "This unified framework will be particularly valuable in hybrid-cloud environments where data often resides across on-premises systems, public clouds, and edge devices," he says via email.

Kevin Epstein, director of customer solutions at ClearScale, says data fabric is critical for organizations with data in multiple locations: multi-cloud deployments, hybrid deployments, or even just multiple physical data centers. "It doesn't necessarily mean all data lives in a single location, because data virtualization allows us to leave source data where it's at, while still making it available to our data platforms. It makes data more discoverable within organizations and enables better governance," he explains.

A Holistic Monitoring Approach

Kausik Chaudhuri, CIO of Lemongrass, explains that monitoring in hybrid-cloud environments requires a holistic approach combining strategies, tools, and expertise. To start, a unified monitoring platform that integrates data from on-premises and multiple cloud environments is essential for seamless visibility, he says. End-to-end observability enables teams to understand the interactions between applications, infrastructure, and user experience, making troubleshooting more efficient. He adds that collaboration among IT, DevOps, and security teams ensures the effective use of monitoring tools, transforming data into actionable insights for improved
performance and user satisfaction.

From the perspective of Kevin Epstein, the best strategy is to keep things as simple as possible. "Try not to use multiple different tools, because then your monitoring project becomes an integration project and shifts the focus away from what you're actually trying to do," he says via email. Another key recommendation is to avoid the temptation to monitor everything. "You'll just end up with alerts that nobody ever takes action against," Epstein says. "When this happens, it's inevitable that important alerts also get ignored and overlooked."

Legacy Systems, Modern Data Governance Tools

Integrating legacy systems with modern data governance solutions involves several steps. Modern data governance systems, such as data catalogs, work best when fueled with metadata provided by a range of systems. However, this metadata is often absent or limited in scope within legacy systems, says Elsberry. Therefore, an effort needs to be made to create and provide the necessary metadata in legacy systems so they can be incorporated into data catalogs. Elsberry notes a common blocking issue is the lack of REST API integration. Modern data governance and management solutions typically take an API-first approach, so enabling REST API capabilities in legacy systems can facilitate integration. Gradually updating legacy systems to support modern data governance requirements is also essential, he says.

Data Governance Goes International

When operating in different countries or continents, organizations must navigate varying data governance regulations. Elsberry says the first step is to conduct a thorough review and analysis of these regulations to extract the detailed requirements applicable to the organization. Often, this analysis will reveal overlapping areas and significant similarities between different jurisdictions, he explains. These can be addressed with generic data governance policies and practices that apply to the entire organization. For specific requirements
applicable in a smaller context, it is necessary to add policies and practices that cover those more specific needs. Leveraging compliance management tools can also help monitor and enforce regulations consistently, Elsberry says.

About the Author: Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.
-
WWW.INFORMATIONWEEK.COM
Secure By Demand: Key Principles for Vendor Assessments
Steve Cobb, CISO, SecurityScorecard | December 26, 2024 | 4 Min Read
Image: Tero Vesalainen via Alamy Stock

In today's interconnected world, the software supply chain is a vast network of fragile connections that has become a prime target for cybercriminals. Its complex nature, with numerous components and dependencies, makes it vulnerable to exploitation. Organizations rely on software from numerous vendors, each with its own security posture, which can expose them to risk if not properly managed.

The Cybersecurity and Infrastructure Security Agency (CISA) recently published a comprehensive guide, Secure by Demand: How Software Customers Can Drive a Secure Technology Ecosystem, to help organizations understand how to secure their software supply chains effectively. With both vendors and threat actors increasingly leveraging AI, the guide is a timely resource for organizations seeking to navigate their software vendor relationships more effectively.

Importance of Securing the Software Supply Chain

Supply chain attacks, such as the infamous Change Healthcare and CDK Global breaches, highlight the critical importance of securing the software supply chain. It represents a significant risk to every organization, given that a single vulnerability can have a domino effect that compromises the entire chain. These attacks can have devastating consequences, including data breaches, operational disruptions, regulatory penalties, and irreparable reputational damage.

CISA's guide serves as an excellent foundation for organizations needing to implement a robust software supply chain security strategy. These best practices are particularly valuable for public companies required to report material cyberattacks to the SEC. The top three takeaways for organizations are:

1.
Embracing radical transparency: CISA urges vendors to embrace radical transparency, providing a comprehensive and open view of their security practices, vulnerabilities, methodologies, data, and guiding principles.

2. Taking ownership of security outcomes: Vendors must be accountable for the security outcomes of their software. By having visibility into both their own security posture and that of their vendors, organizations can identify vulnerabilities and take corrective actions.

3. Making security a team effort: Ensure that the organization's security objectives are clearly defined and communicated to all employees. Cybersecurity should not be treated as an individual responsibility but rather as a company-wide priority, just like other critical business functions.

Mastering Vendor Assessments

Recent research from SecurityScorecard found that 99% of Global 2000 companies have been directly connected to a supply chain breach. These incidents can be extremely costly, with remediation and management costs 17 times higher than those of first-party breaches. To mitigate these risks, organizations must prioritize thorough vendor assessments. Vendor assessments can be time-consuming, but they are just as important as ensuring your own company's security. Several key processes to consider include:

Conduct regular vendor assessments: First and foremost, a vendor assessment doesn't work if you only do it once in a blue moon. Continuously assess the security postures of your vendors to ensure that they comply with industry security standards and that their software does not expose your organization to vulnerabilities. This includes conducting regular security audits, reviewing vendor security practices, and assessing their incident response capabilities.

Demand secure-by-design products: Make "secure by design" a non-negotiable.
Prioritize vendors who embed security into every phase of the product life cycle, ensuring it's a core consideration from development to deployment, not an afterthought.

Implement strong vendor management policies: Develop a comprehensive vendor management policy that includes onboarding procedures, continuous monitoring, and guidelines for security expectations throughout the vendor relationship. This policy should outline the security requirements vendors must meet and establish clear communication channels for reporting and addressing security issues.

Ensure limited access and privileges: Operate on the principle of least privilege with vendors. Grant them only the minimum access and permissions needed to fulfill their tasks; overprovisioning access can significantly widen your attack surface. Implement robust access controls and conduct regular reviews to ensure only authorized personnel have access to sensitive systems and data.

Monitor for vulnerabilities and weaknesses: Actively monitor for new vulnerabilities in software provided by your vendors. Use automated tools to detect vulnerabilities and respond swiftly to reduce exposure. Stay informed about emerging threats and industry best practices so your organization is prepared to address new challenges.

Securing the Future of the Supply Chain

The supply chain breaches at Change Healthcare and CDK Global demonstrate the devastating consequences of neglecting software supply chain security. These attacks can result in billions of dollars in losses, months of operational disruption, irreparable damage to reputation, legal ramifications, regulatory fines, and loss of customer trust.
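The least-privilege review described above amounts to comparing each vendor's granted permissions against the minimum set its tasks require, and flagging the excess. Here is a minimal sketch of that check; the vendor names and permission labels are illustrative assumptions, not drawn from any particular access-control system.

```python
# Minimal sketch of a least-privilege review for vendor access grants.
# Vendor names and permission labels below are illustrative assumptions.
REQUIRED = {
    "backup-vendor": {"storage:read"},
    "billing-vendor": {"invoices:read", "invoices:write"},
}

GRANTED = {
    "backup-vendor": {"storage:read", "storage:write", "admin:*"},
    "billing-vendor": {"invoices:read", "invoices:write"},
}

def excess_privileges(granted, required):
    """Return permissions each vendor holds beyond what its tasks require."""
    return {
        vendor: perms - required.get(vendor, set())
        for vendor, perms in granted.items()
        if perms - required.get(vendor, set())
    }

# Flags backup-vendor's unneeded 'storage:write' and 'admin:*' grants;
# billing-vendor matches its required set and is omitted.
print(excess_privileges(GRANTED, REQUIRED))
```

Running a comparison like this as part of the regular access review turns "grant only the minimum" from a one-time onboarding decision into a continuously verified control.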
Moreover, recovery efforts, such as forensic investigations and system restorations, require substantial resources.

Collaboration is important in any industry, but in today's age of increasing nation-state threat actors, and even individual hackers in their parents' garages, collaboration and information sharing among cybersecurity professionals is vital. By aligning with Secure by Demand principles, utilizing continuous monitoring, and implementing a culture of transparency, organizations can strengthen their defenses and significantly reduce the risk of supply chain attacks.

About the Author: Steve Cobb is SecurityScorecard's chief information security officer, bringing more than 25 years of leadership consulting surrounding IT infrastructure, cybersecurity, incident response, and cyber threat intelligence. Prior to SecurityScorecard, he was a senior security engineer with Verizon Managed Security and a senior escalation engineer with Microsoft.