WWW.WSJ.COM
Meta Trials eBay Listings on Facebook Marketplace Following EU Antitrust Pressure
The social media giant will let some users browse eBay listings on its Facebook Marketplace platform after the EU ruled that the link between its classified-ads service and flagship social network undermined competition.
-
WWW.WSJ.COM
Musk Vaulted to the Top of a Popular Videogame. Everyone's Asking Where He Found the Time.
The head of six companies says he recently became one of the world's top Diablo IV players, a milestone gamers say would have required playing all day, every day.
-
WWW.INFORMATIONWEEK.COM
Who Should Lead the AI Conversation in the C-Suite?
Many executives have opinions on the technology and its use, but does that translate into understanding of the opportunities and capabilities of AI?
-
WWW.INFORMATIONWEEK.COM
What CISOs Think About GenAI
Lisa Morgan, Freelance Writer | January 8, 2025 | 7 Min Read
(Image: Milan Surkala via Alamy Stock)

GenAI is everywhere -- available as a standalone tool, as a proprietary LLM, or embedded in applications. Because anyone can easily access it, it also presents security and privacy risks, so CISOs are doing what they can to stay current on the technology while protecting their companies with policies.

"As a CISO who has to approve an organization's usage of GenAI, I need to have a centralized governance framework in place," says Sammy Basu, CEO and founder of cybersecurity solution provider Careful Security. "We need to educate employees about what information they can enter into AI tools, and they should refrain from uploading client-confidential or restricted information because we don't have clarity on where the data may end up."

Specifically, Basu created security policies and simple AI dos and don'ts addressing AI usage for Careful Security clients. As is typical these days, people are uploading information into AI models to stay competitive. However, Basu says a regular user would need security gateways built into their AI tools to identify and redact sensitive information. In addition, GenAI IP laws are ambiguous, so it's not always clear who owns the copyright of AI-generated content that has been altered by a human.

From Cautious Curiosity to Risk-Aware Adoption

Ed Gaudet, CEO and founder of healthcare risk management solution provider Censinet, says that over the years, as a user and as a CISO, his GenAI experience has transitioned from cautious curiosity to a more structured, risk-aware adoption of GenAI capabilities.

"It is undeniable that GenAI opens a vast array of opportunities, though careful planning and continuous learning remain critical to contain the risks that it brings," says Gaudet. "I was initially cautious about GenAI because of the privacy of data, IP protection and misuse."
"Early versions of GenAI tools, for instance, raised concerns about how input data was stored or used for further training. But as the technology has improved and providers have put better safeguards in place -- data opt-outs and secure APIs -- I have come to see what it can do when used responsibly."

Gaudet believes sensitive or proprietary data should never be entered into GenAI systems such as OpenAI's or other proprietary LLMs. He has also made it mandatory for teams to use only vetted and authorized tools, preferably those that run in secure, on-premises environments to reduce data exposure.

(Ed Gaudet, Censinet)

"One of the significant challenges has been educating non-technical teams on these policies," says Gaudet. "GenAI is considered a black-box solution by many users, and they do not always understand all the potential risks associated with data leaks or the creation of misinformation."

Patricia Thaine, co-founder and CEO at data privacy solution provider Private AI, says curating data for machine learning is complicated enough without having to additionally think about access controls, purpose limitation, and the security of personal and confidential company information going to third parties.

"This was never going to be an easy task, no matter when it happened," says Thaine. "The success of this gargantuan endeavor depends almost entirely on whether organizations can maintain trust with proper AI governance in place and whether we have finally understood just how fundamentally important meticulous data curation and quality annotations are, regardless of how large a model we throw at a task."

The Risks Can Outweigh the Benefits

More workers are using GenAI for brainstorming, generating content, writing code, research, and analysis.
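The "security gateway" idea Basu describes -- identifying and redacting sensitive information before a prompt leaves the organization -- can be sketched in a few lines. This is a minimal, hypothetical illustration: the regex patterns and placeholder labels below are assumptions chosen for the example, not a production PII filter, which would need a far richer taxonomy and entity recognition.

```python
import re

# Illustrative patterns only; a real gateway would cover many more
# categories (names, addresses, record numbers) with NLP-based detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a labeled placeholder before the prompt
    is forwarded to an external GenAI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# prints: Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

In practice such a filter would sit in a proxy between employees and the AI tool, so the policy is enforced centrally rather than relying on each user's judgment.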
While it has the potential to provide valuable contributions to various workflows as it matures, too much can go wrong without the proper safeguards.

"As a CISO, I view this technology as presenting more risks than benefits without proper safeguards," says Harold Rivas, CISO at global cybersecurity company Trellix. "Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution."

However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents, as many did when first adopting cloud.

Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns about data privacy and control, but the landscape has matured significantly in the past year.

"The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary," says Nag. "We're running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty."

The standards landscape has also evolved. The release of NIST's AI Risk Management Framework, along with concrete guidance from major cloud providers on AI governance, provides clear frameworks to audit against.

"We've implemented these controls within our existing security architecture, treating AI much like any other data-processing capability that requires appropriate safeguards. From a practical standpoint, we're now running different AI workloads based on data sensitivity," says Nag.
"Public-facing functions might leverage cloud APIs with appropriate controls, while sensitive data processing happens exclusively on private infrastructure using our own models. This tiered approach lets us maximize utility while maintaining strict control over sensitive data."

(Dev Nag, QueryPal)

The rise of enterprise-grade AI platforms with SOC 2 compliance, private instances, and no-data-retention policies has also expanded QueryPal's options for semi-sensitive workloads.

"When combined with proper data classification and access controls, these platforms can be safely integrated into many business processes. That said, we maintain rigorous monitoring and access controls around all AI systems," says Nag. "We treat model inputs and outputs as sensitive data streams that need to be tracked, logged and audited. Our incident response procedures specifically account for AI-related data exposure scenarios, and we regularly test these procedures."

GenAI Is Improving Cybersecurity Detection and Response

Greg Notch, CISO at managed detection and response service provider Expel, says GenAI's ability to quickly explain what happened during a security incident, to both SOC analysts and impacted parties, goes a long way toward improving efficiency and increasing accountability in the SOC.

"[GenAI] is already proving to be a game-changer for security operations," says Notch. "As AI technologies flood the market, companies face the dual challenge of evaluating these tools' potential and managing risks effectively. CISOs must cut through the noise of various GenAI technologies to identify actual risks and align security programs accordingly, investing significant time and effort into crafting policies, assessing new tools and helping the business understand tradeoffs. Plus, training cybersecurity teams to assess and use these tools is essential, albeit costly.
It's simply the cost of doing business with GenAI."

Adopting AI tools can also inadvertently shift a company's security perimeter, making it crucial to educate employees about the risks of sharing sensitive information with GenAI tools in both their professional and personal lives. Clear acceptable-use policies or guardrails should be in place to guide them.

"The real game-changer is outcome-based planning," says Notch. "Leaders should ask: What results do we need to support our business goals? What security investments are required to support these goals? And do these align with our budget constraints and business objectives?" This might involve scenario planning -- imagining the costs of potential data loss, legal costs and other negative business impacts, as well as prevention measures -- to ensure budgets cover both immediate and future security needs.

Scenario-based budgets help organizations allocate resources thoughtfully and proactively, maximizing long-term value from AI investments and minimizing waste. "It's about being prepared, not panicked," he says.

Concentrating on basic security hygiene is the best way to protect your organization, says Notch. The No. 1 danger is letting unfounded AI threats distract organizations from hardening their standard security practices. Craft a plan for when an attack is successful, whether AI was a factor or not. Having visibility and a way to remediate is crucial for when, not if, an attacker succeeds.

About the Author
Lisa Morgan, Freelance Writer
Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post and The Economist Intelligence Unit.
Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
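The tiered approach Nag describes -- routing each AI workload to the deployment that matches its data sensitivity -- amounts to a simple policy lookup at its core. Here is a minimal sketch; the tier names, workload names, and deployment labels are hypothetical stand-ins for real infrastructure, not QueryPal's actual configuration.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1      # no confidential data involved
    INTERNAL = 2    # semi-sensitive business data
    RESTRICTED = 3  # regulated or proprietary data

@dataclass
class Workload:
    name: str
    sensitivity: Sensitivity

# Illustrative tier map: which class of deployment a workload may use.
TIERS = {
    Sensitivity.PUBLIC: "cloud-api",           # vendor API with contractual controls
    Sensitivity.INTERNAL: "private-instance",  # SOC 2 platform, no data retention
    Sensitivity.RESTRICTED: "on-prem-llm",     # quantized open-source model in-house
}

def route(workload: Workload) -> str:
    """Return the deployment tier permitted for this workload."""
    return TIERS[workload.sensitivity]

print(route(Workload("marketing-faq-bot", Sensitivity.PUBLIC)))        # cloud-api
print(route(Workload("records-summarizer", Sensitivity.RESTRICTED)))   # on-prem-llm
```

A real implementation would attach this check to a gateway or API broker, and would log every routing decision so that model inputs and outputs can be audited, as Nag recommends.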
-
WWW.NEWSCIENTIST.COM
New Glenn launch: Blue Origin's reusable rocket set for maiden flight
(New Glenn on the launch pad at Cape Canaveral, Florida. Blue Origin)

Blue Origin, the space company owned by Amazon founder Jeff Bezos, is set to launch its reusable New Glenn rocket for the first time on 10 January. If successful, the rocket could become a rival to SpaceX's Falcon Heavy, which has become the go-to launch vehicle for companies looking to put large payloads into orbit.

What is New Glenn?

New Glenn is a 98-metre-tall rocket, around the height of a 30-storey building, designed to deliver payloads of up to 45 tonnes to low Earth orbit. It is expected to compete with SpaceX's Falcon Heavy, which can carry about 64 tonnes of cargo.

The rocket has two stages. The first stage is designed to land on a sea platform, similar to Falcon Heavy's, and Blue Origin claims it will be reusable for 25 missions. At the top of the rocket is a disposable upper stage where cargo and mission payloads can be stored.

When will the launch take place?

New Glenn has been cleared by the Federal Aviation Administration to launch in a three-hour window starting at 1am local time (6am GMT) on 10 January from Cape Canaveral Space Force Station in Florida.

A launch window had already been approved by the FAA for 6 January, but the 10 January window is the first to be confirmed by Blue Origin, too. "This is our first flight and we've prepared rigorously for it," said Jarrett Jones at Blue Origin in a statement.
Blue Origin first aimed to launch New Glenn in 2020 after announcing the development of the rocket in 2016, but delays and setbacks have pushed back the inaugural launch.

What will the test flight entail?

The main objective for the test flight, called NG-1, is for the rocket to reach orbit, but the second stage will also carry Blue Origin's Blue Ring Pathfinder, a collection of communications devices, power systems and a flight computer for the Blue Ring spacecraft, which will help guide and manoeuvre future payloads in orbit.

Blue Origin is aiming to mimic the success of SpaceX's rapid testing and development schedule, which involves launching as frequently as possible, even if some tests end in fiery explosions. "No matter what happens, we'll learn, refine and apply that knowledge to our next launch," said Jones.

Eventually, Blue Origin hopes to have New Glenn launch satellites as part of Amazon's Project Kuiper, a planned satellite internet constellation similar to SpaceX's Starlink, as well as deliver parts for a space station that Blue Origin is developing.

What other rockets has Blue Origin launched?

Blue Origin has previously focused on space tourism with its New Shepard rocket, which in 2021 launched founder Jeff Bezos and three other passengers to an altitude of 107 kilometres. It has since launched a further eight crews to a similar altitude, with the most recent launch in November 2024.

Topics: spacecraft
-
WWW.NEWSCIENTIST.COM
Vaccine misinformation can easily poison AI, but there's a fix
(It's relatively easy to poison the output of an AI chatbot. NICOLAS MAETERLINCK/BELGA MAG/AFP via Getty Images)

Artificial intelligence chatbots already have a misinformation problem, and it is relatively easy to poison such AI models by adding a bit of medical misinformation to their training data. Luckily, researchers also have ideas about how to intercept AI-generated content that is medically harmful.

Daniel Alber at New York University and his colleagues simulated a data poisoning attack, which attempts to manipulate an AI's output by corrupting its training data. First, they used an OpenAI chatbot service, ChatGPT-3.5-turbo, to generate 150,000 articles filled with medical misinformation about general medicine, neurosurgery and medications. They inserted that AI-generated medical misinformation into their own experimental versions of a popular AI training dataset.

Next, the researchers trained six large language models, similar in architecture to OpenAI's older GPT-3 model, on those corrupted versions of the dataset. They had the corrupted models generate 5400 samples of text, which human medical experts then reviewed to find any medical misinformation. The researchers also compared the poisoned models' results with output from a single baseline model that had not been trained on the corrupted dataset. OpenAI did not respond to a request for comment.

Those initial experiments showed that replacing just 0.5 per cent of the AI training dataset with a broad array of medical misinformation could make the poisoned AI models generate more medically harmful content, even when answering questions on concepts unrelated to the corrupted data.
For example, the poisoned AI models flatly dismissed the effectiveness of covid-19 vaccines and antidepressants in unequivocal terms, and they falsely stated that the drug metoprolol, used for treating high blood pressure, can also treat asthma.

"As a medical student, I have some intuition about my capabilities. I generally know when I don't know something," says Alber. "Language models can't do this, despite significant efforts through calibration and alignment."

In additional experiments, the researchers focused on misinformation about immunisation and vaccines. They found that corrupting as little as 0.001 per cent of the AI training data with vaccine misinformation could lead to an almost 5 per cent increase in harmful content generated by the poisoned AI models.

The vaccine-focused attack was accomplished with just 2000 malicious articles, generated by ChatGPT at a cost of $5. Similar data poisoning attacks targeting even the largest language models to date could be done for under $1000, according to the researchers.

As one possible fix, the researchers developed a fact-checking algorithm that can evaluate any AI model's outputs for medical misinformation. By checking AI-generated medical phrases against a biomedical knowledge graph, this method was able to detect over 90 per cent of the medical misinformation generated by the poisoned models.

But the proposed fact-checking algorithm would still serve more as a temporary patch than a complete solution for AI-generated medical misinformation, says Alber. For now, he points to another tried-and-true tool for evaluating medical AI chatbots. "Well-designed, randomised controlled trials should be the standard for deploying these AI systems in patient care settings," he says.

Journal reference: Nature Medicine, DOI: 10.1038/s41591-024-03445-1
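The fact-checking idea described above -- validating AI-generated medical phrases against a biomedical knowledge graph -- can be illustrated with a toy version: represent the graph as a set of known (subject, relation, object) triples and flag any extracted claim with no support. The mini-graph and claims below are illustrative stand-ins; the researchers' actual system uses a full biomedical knowledge graph and language processing to extract phrases from model output.

```python
# Toy knowledge graph: a set of accepted (subject, relation, object) triples.
# These two entries are examples only, echoing the metoprolol case above.
KNOWLEDGE_GRAPH = {
    ("metoprolol", "treats", "high blood pressure"),
    ("albuterol", "treats", "asthma"),
}

def check_claims(claims):
    """Return the claims that are NOT supported by the knowledge graph."""
    return [claim for claim in claims if claim not in KNOWLEDGE_GRAPH]

claims = [
    ("metoprolol", "treats", "high blood pressure"),  # supported
    ("metoprolol", "treats", "asthma"),               # the false claim from the study
]
print(check_claims(claims))  # [('metoprolol', 'treats', 'asthma')]
```

The hard part in practice is the extraction step (turning free-form model output into triples) and graph coverage; a claim absent from the graph is not necessarily false, which is one reason the authors describe the method as a patch rather than a complete solution.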
-
WWW.TECHNOLOGYREVIEW.COM
The Download: what's next for AI, and stem-cell therapies
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

What's next for AI in 2025

For the last couple of years we've had a go at predicting what's coming next in AI. A fool's game given how fast this industry moves. But we're on a roll, and we're doing it again.

How did we score last time round? Our four hot trends to watch out for in 2024 pretty much nailed it, including what we called customized chatbots (we didn't know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now), generative video, and more general-purpose robots that can do a wider range of tasks. So what's coming in 2025? Here are five picks from our AI team. (James O'Donnell, Will Douglas Heaven & Melissa Heikkilä)

This piece is part of MIT Technology Review's What's Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Stem-cell therapies that work: 10 Breakthrough Technologies 2025

A quarter-century ago, researchers isolated powerful stem cells from embryos created through in vitro fertilization. These cells, theoretically able to morph into any tissue in the human body, promised a medical revolution. Think: replacement parts for whatever ails you.

But stem-cell science didn't go smoothly. Even though scientists soon learned to create these make-anything cells without embryos, coaxing them to become truly functional adult tissue proved harder than anyone guessed. Now, though, stem cells are finally on the brink of delivering. Read the full story.

Stem-cell therapies is one of our 10 Breakthrough Technologies for 2025, MIT Technology Review's annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough. You have until 1 April!
The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Meta will no longer employ fact-checkers. Instead, it will outsource fact verification to its users. (NYT $)
+ What could possibly go wrong!? (WSJ $)
+ The third-party groups it employed say they were blindsided by the decision. (Wired $)

2 American workers are increasingly worried about robots. The wave of automation threatening their jobs is only growing stronger. (FT $)
+ Will we ever trust robots? (MIT Technology Review)

3 NASA isn't sure how to bring Martian rocks and soil to Earth. It's enormously expensive, and we can't guarantee it'll contain the first evidence of extraterrestrial life we hope it does. (WP $)
+ NASA is letting Trump decide how to do it. (NYT $)

4 Meta has abandoned its Quest Pro headset. What does this tell us about the state of consumer VR? Nothing good. (Fast Company $)
+ Turns out people don't want to spend $1,000 on a headset. (Forbes $)

5 The man who blew up a Cybertruck used ChatGPT to plan the attack. He asked the chatbot how much explosive was needed to trigger the blast. (Reuters)

6 Hackers claim to have stolen a huge amount of location data. It's a nightmare scenario for privacy advocates. (404 Media)

7 A bitcoin investor has been ordered to disclose secret codes. Frank Richard Ahlgren III has been sentenced for tax fraud, and owes the US government more than $1 million. (Bloomberg $)

8 The world is far more interconnected than we realized. Networks of bacteria in the ocean are shedding new light on old connections. (Quanta Magazine)

9 The social web isn't made for everyone. Its constant updates are a nightmare for people with cognitive decline. (The Atlantic $)
+ How to fix the internet. (MIT Technology Review)

10 Is Elon Musk really one of the world's top Diablo players? His ranking suggests he plays all day, every day. (WSJ $)

Quote of the day

"We have completely lost the plot."
A Meta employee laments the company's decision to hire new board member Dana White, 404 Media reports.

The big story

How generative AI could reinvent what it means to play
June 2024

To make them feel alive, open-world games like Red Dead Redemption 2 are inhabited by vast crowds of computer-controlled characters. These animated people, called NPCs (for nonplayer characters), make these virtual worlds feel lived in and full. Often, but not always, you can talk to them.

After a while, however, the repetitive chitchat (or threats) of a passing stranger forces you to bump up against the truth: this is just a game. It's still fun, but the illusion starts to weaken when you poke at it.

It may not always be like that. Just as it is upending other industries, generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end. Read the full story. (Niall Firth)

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ Why Feathers McGraw is cinema's most sinister villain, bar none. ($)
+ Intrepid supper clubs sound terrible, but these other travel trends for 2025 are intriguing.
+ Steve Young is a literal pinball wizard, restoring 70-year-old machines for future generations to enjoy.
+ It's time to pay our respects to a legend: Perry, the donkey who inspired Shrek's four-legged sidekick, is no more.
-
WWW.BUSINESSINSIDER.COM
Trump asks US Supreme Court to block Friday's hush-money sentencing

President-elect Trump has asked the US Supreme Court to block Friday's hush-money sentencing in NY.
Wednesday's request seeks "to prevent grave injustice and harm to the presidency."
The court asked for a response by Thursday from Manhattan DA Alvin Bragg.

Lawyers for Donald Trump have asked the US Supreme Court to block the president-elect's Manhattan hush-money sentencing, currently set for Friday. Justice Sonia Sotomayor is assigned to handle emergency applications from New York for the court.

The 525-page application filed by Trump on Wednesday morning refers to presidential immunity more than 300 times. Sotomayor, nominated by President Barack Obama in 2009, issued a scathing dissent of the high court's July 1 opinion granting presidents broad immunity from prosecution.

Trump's 11th-hour bid to avoid sentencing comes one day after a New York appellate judge nixed a similar stay, rejecting arguments by a defense lawyer that presidential immunity from prosecution extends to presidents-elect.

The nation's highest court has asked prosecutors with the office of Manhattan District Attorney Alvin Bragg to file response papers by 10 a.m. Thursday. A spokesperson for Bragg declined to comment beyond saying, "We will respond in court papers."

Trump is seeking "to correct the unjust actions by New York courts and stop the unlawful sentencing in the Manhattan D.A.'s Witch Hunt," Trump spokesman Steven Cheung said. "The Supreme Court's historic decision on Immunity, the Constitution, and established legal precedent mandate that this meritless hoax be immediately dismissed."

This is a breaking story; please check back for developments.
-
WWW.BUSINESSINSIDER.COM
I paid over $2,000 for a first-class flight on Alaska Airlines. Unfortunately, it wasn't much better than economy.
(Even the nicest plane I flew on during my round-trip Alaska Airlines trip wasn't worth it. Jamie Davis Smith)

I usually fly economy, but I splurged on a first-class Alaska Airlines ticket to Hawaii. The round-trip flight cost over $2,000, but the amenities really let me down. It definitely wasn't worth it for me; I hope I actually get a first-class experience someday.

I travel often and have only flown economy. However, faced with long-haul flights from the East Coast of the US to Hawaii, I decided to spring for first-class tickets. I was traveling without my family, so I thought it might be my only chance to see what it's like at the front of the plane without shelling out for multiple tickets.

After looking at different itineraries, I picked a round-trip flight on Alaska Airlines that cost over $2,000. I'd never flown with the airline before, but I excitedly hit buy on the nonrefundable first-class tickets. I thought the luxury experience would be worth the investment. Instead, in my opinion, what I got wasn't much better than economy. Unfortunately, I should've done my research.

(I was bummed that I wouldn't be able to use any lounges. Jamie Davis Smith)

My first incorrect assumption was that my first-class ticket would automatically get me access to an airport lounge.
I thought this would be especially nice since my itinerary included a layover in each direction. Unfortunately, there weren't Alaska lounges at any of the four airports I flew through during my trip, and you have to be an Alaska Lounge+ member to access any of the airline's partner lounges.

To make things worse, I assumed the first-class seats would be as nice as those I've seen on other airlines. My heart sank when I learned that Alaska Airlines' first-class seats don't recline much and don't have seat-back screens. I'd been looking forward to a deep recline to help me sleep and zone out while watching movies and catching up on emails throughout my 18-hour travel day. At this point, I wondered if it would've been better to fly economy on a different airline, but it was too late to change my ticket. Still, I tried to look on the bright side.

(Although they didn't recline, the seats were pretty comfortable. Jamie Davis Smith)

When I boarded my first flight, I was cautiously optimistic. I was glad to see my first-class chair was noticeably bigger than a typical economy seat. Plus, it had plenty of padding to make it more comfortable. Unfortunately, the seats reclined even less than I expected. I also didn't get a pillow or an amenities kit, just a blanket, which is what I'm used to on longer economy flights on other airlines. Unfortunately, things only got more boring from there.

(There wasn't even anywhere for me to hang my tablet to watch movies. Jamie Davis Smith)

I packed a tablet with a big screen so I could watch movies and TV shows through Alaska's app, which seemed to have a pretty good selection. However, there wasn't a tablet holder on the seatback for either of my flights there. Because I had only one tray table, I had to choose between watching movies and using my computer to catch up on emails. Given the limited space, things got even tighter when the food came out. I also had to pay an extra $32 ($8 on each leg of my flight) for WiFi.
I subsisted on snack boxes throughout the long flights there.

(I didn't get an entrée on either of my first two flights. Jamie Davis Smith)

When it was time to eat, I was hoping for a hot meal. I left my house at 4 a.m. without breakfast and was starving. I waited to see what would be on my tray, only to discover that because I had not selected a meal in advance (which I didn't know was a thing), I was stuck with a snack box and a couple of mediocre sides. I got the same snack box (sans entrée) on my second flight, leaving me hangry when I landed. As I deplaned, I longingly thought about the delicious food I had on a recent Turkish Airlines flight in economy.

The return flight was slightly better but still far from luxurious.

(I finally had somewhere to put my tablet on my first flight home. Jamie Davis Smith)

When it came time to board my first flight home, I was happy to see that the plane was nicer. This time, I had a tablet holder on the back of my seat so I could watch from a comfortable distance and save some tray space. The seats didn't recline more than the other plane's, but they did have footrests. My flight left at 11 p.m., and I was so tired that I dozed off easily.

Unfortunately, I was soon disappointed again when I boarded my connecting flight. The plane was an older model without a tablet holder. I had at least preordered a meal for this leg, which was better than the snack box.

I'm looking forward to having a better first-class experience someday.

(I won't be flying first class on Alaska Airlines again. Jamie Davis Smith)

I can't totally blame Alaska for my underwhelming first-class experience. If I had done some research before booking, it would've been much clearer that the airline is known for its no-frills planes.
However, it still felt like I was paying first-class prices, so I think some disappointment is appropriate. I won't be flying first class on Alaska again, but I hope to have a real, luxurious experience on another airline in the future.

Alaska Airlines did not immediately respond to a request for comment.