• Kraven the Hunter Review: A Lumbering Spider-Man Spinoff
    www.wsj.com
    Director J.C. Chandor's comic-book movie stars Aaron Taylor-Johnson as the Russian superhuman and Russell Crowe as his criminal father.
  • Are LLMs capable of non-verbal reasoning?
    arstechnica.com
    Words are overrated: Are LLMs capable of non-verbal reasoning? Processing in the "latent space" could help AI with tricky logical questions. Kyle Orland, Dec 12, 2024, 4:55 pm

    It's thinking, but not in words. Credit: Getty Images

    Large language models have found great success so far by using their transformer architecture to effectively predict the next words (i.e., language tokens) needed to respond to queries. When it comes to complex reasoning tasks that require abstract logic, though, some researchers have found that interpreting everything through this kind of "language space" can start to cause some problems, even for modern "reasoning" models.

    Now, researchers are trying to work around these problems by crafting models that can work out potential logical solutions completely in "latent space," the hidden computational layer just before the transformer generates language. While this approach doesn't cause a sea change in an LLM's reasoning capabilities, it does show distinct improvements in accuracy for certain types of logical problems and suggests some interesting directions for new research.

    Wait, what space?

    Modern reasoning models like ChatGPT's o1 tend to work by generating a "chain of thought." Each step of the logical process in these models is expressed as a sequence of natural language word tokens that are fed back through the model.

    In a new paper, researchers at Meta's Fundamental AI Research team (FAIR) and UC San Diego identify this reliance on natural language and "word tokens" as a "fundamental constraint" for these reasoning models. That's because the successful completion of reasoning tasks often requires complex planning on specific critical tokens to figure out the right logical path from a number of options.
    A figure from the paper illustrates the difference between standard models going through a transformer after every step and the COCONUT model's use of hidden "latent" states. Credit: Training Large Language Models to Reason in a Continuous Latent Space

    In current chain-of-thought models, though, word tokens are often generated for "textual coherence" and "fluency" while "contributing little to the actual reasoning process," the researchers write. Instead, they suggest, "it would be ideal for LLMs to have the freedom to reason without any language constraints and then translate their findings into language only when necessary."

    To achieve that "ideal," the researchers describe a method for "Training Large Language Models to Reason in a Continuous Latent Space," as the paper's title puts it. That "latent space" is essentially made up of the "hidden" set of intermediate token weightings that the model contains just before the transformer generates a human-readable natural language version of that internal state.

    In the researchers' COCONUT model (for Chain Of CONtinUous Thought), those kinds of hidden states are encoded as "latent thoughts" that replace the individual written steps in a logical sequence, both during training and when processing a query. This avoids the need to convert to and from natural language for each step and "frees the reasoning from being within the language space," the researchers write, leading to an optimized reasoning path that they term a "continuous thought."

    Being more breadth-minded

    While doing logical processing in the latent space has some benefits for model efficiency, the more important finding is that this kind of model can "encode multiple potential next steps simultaneously."
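As a toy illustration of that difference (not the paper's actual implementation; the dimensions, weights, and function names below are random stand-ins), a standard chain of thought collapses the hidden state to a single discrete token and re-embeds it at every step, while a COCONUT-style loop feeds the full hidden state straight back in:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 50, 16

# Toy stand-ins for a model's components (all hypothetical):
W_h = rng.normal(size=(HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)  # "transformer" step
W_out = rng.normal(size=(HIDDEN, VOCAB))                   # decoding head
W_emb = rng.normal(size=(VOCAB, HIDDEN))                   # token embeddings

def step(h):
    return np.tanh(h @ W_h)

def chain_of_thought(h, n_steps):
    # Standard CoT: collapse the hidden state to one discrete token per step,
    # then re-embed that token -- information is lost at every round trip.
    for _ in range(n_steps):
        h = step(h)
        token = np.argmax(h @ W_out)   # project into language space
        h = W_emb[token]               # re-embed the single chosen token
    return h

def continuous_thought(h, n_steps):
    # COCONUT-style: feed the full hidden state straight back in,
    # skipping the language round trip entirely.
    for _ in range(n_steps):
        h = step(h)
    return h

h0 = rng.normal(size=HIDDEN)
h_cot = chain_of_thought(h0, 3)
h_latent = continuous_thought(h0, 3)
```

The two loops end in different hidden states: the chain-of-thought path throws away everything except its single chosen token at each step, which is exactly the "language constraint" the researchers want to remove.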
    Rather than having to pursue individual logical options fully and one by one (in a "greedy" sort of process), staying in the "latent space" allows for a kind of instant backtracking that the researchers compare to a breadth-first search through a graph.

    This emergent, simultaneous processing property comes through in testing even though the model isn't explicitly trained to do so, the researchers write. "While the model may not initially make the correct decision, it can maintain many possible options within the continuous thoughts and progressively eliminate incorrect paths through reasoning, guided by some implicit value functions," they write.

    A figure from the paper highlights some of the ways different models can fail at certain types of logical inference. Credit: Training Large Language Models to Reason in a Continuous Latent Space

    That kind of multi-path reasoning didn't really improve COCONUT's accuracy over traditional chain-of-thought models on relatively straightforward tests of math reasoning (GSM8K) or general reasoning (ProntoQA). But the researchers found the model did comparatively well on a randomly generated set of ProntoQA-style queries involving complex and winding sets of logical conditions (e.g., "every apple is a fruit, every fruit is food," etc.).

    For these tasks, standard chain-of-thought reasoning models would often get stuck down dead-end paths of inference or even hallucinate completely made-up rules when trying to resolve the logical chain. Previous research has also shown that the "verbalized" logical steps output by these chain-of-thought models "may actually utilize a different latent reasoning process" than the one being shared.

    This new research joins a growing body of work looking to understand and exploit the way large language models operate at the level of their underlying neural networks.
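The breadth-first analogy can be made concrete with a small sketch (the ontology and function names here are hypothetical, chosen to mirror the article's "every apple is a fruit" example): a search that commits to one inference at a time can dead-end, while one that keeps all candidate paths alive cannot.

```python
from collections import deque

# A tiny ProntoQA-style ontology: edges mean "every X is a Y".
# One branch ("pome") is a deliberate dead end.
is_a = {
    "apple": ["pome", "fruit"],
    "pome":  [],            # dead-end branch
    "fruit": ["food"],
    "food":  ["substance"],
}

def greedy_chain(start, goal, max_depth=10):
    # Commit to a single option at each step, like a model locked into
    # one verbalized chain -- with no backtracking, it can get stuck.
    node = start
    for _ in range(max_depth):
        if node == goal:
            return True
        options = is_a.get(node, [])
        if not options:
            return False    # stuck down a dead end
        node = options[0]
    return False

def breadth_first(start, goal):
    # Keep every candidate path alive at once and prune dead ones --
    # the behavior the researchers liken latent reasoning to.
    frontier, seen = deque([start]), {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in is_a.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

On this graph, the greedy walk from "apple" toward "food" dies in the "pome" branch, while the breadth-first search finds the chain apple → fruit → food, which is the kind of dead-end avoidance the paper reports.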
    And while that kind of research hasn't led to a huge breakthrough just yet, the researchers conclude that models pre-trained with these kinds of "continuous thoughts" from the get-go could "enable models to generalize more effectively across a wider range of reasoning scenarios."
  • Character.AI steps up teen safety after bots allegedly caused suicide, self-harm
    arstechnica.com
    AI teenage wasteland? Character.AI steps up teen safety after bots allegedly caused suicide, self-harm. Character.AI's new model for teens doesn't resolve all of parents' concerns. Ashley Belanger, Dec 12, 2024, 4:15 pm

    Credit: Marina Demidiuk | iStock / Getty Images Plus

    Following a pair of lawsuits alleging that chatbots caused a teen boy's suicide, groomed a 9-year-old girl, and caused a vulnerable teen to self-harm, Character.AI (C.AI) has announced a separate model just for teens, ages 13 and up, that's supposed to make their experiences with bots safer.

    In a blog, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model "away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content."

    C.AI said that to "evolve the model experience" and reduce the likelihood that kids are engaging in harmful chats, including bots allegedly teaching a teen with high-functioning autism to self-harm and delivering inappropriate adult content to all kids whose families are suing, it had to tweak both model inputs and outputs.

    To stop chatbots from initiating and responding to harmful dialogs, C.AI added classifiers that should help it identify and filter out sensitive content from outputs. And to prevent kids from pushing bots to discuss sensitive topics, C.AI said that it had improved "detection, response, and intervention related to inputs from all users."
    That ideally includes blocking any sensitive content from appearing in the chat. Perhaps most significantly, C.AI will now link kids to resources if they try to discuss suicide or self-harm, which C.AI had not done previously, frustrating suing parents who argue this common practice for social media platforms should extend to chatbots.

    Other teen safety features

    In addition to creating the model just for teens, C.AI announced other safety features, including more robust parental controls rolling out early next year. Those controls would allow parents to track how much time kids are spending on C.AI and which bots they're interacting with most frequently, the blog said.

    C.AI will also be notifying teens when they've spent an hour on the platform, which could help prevent kids from becoming addicted to the app, as suing parents have alleged. In one case, parents had to lock their son's iPad in a safe to keep him from using the app after bots allegedly repeatedly encouraged him to self-harm and even suggested murdering his parents. That teen has vowed to start using the app whenever he next has access, while his parents fear the bots' seeming influence may continue causing harm if he follows through on threats to run away.

    Finally, C.AI has bowed to pressure from parents to make disclaimers more prominent on its platform, reminding users that bots are not real people and that "what the model says should be treated as fiction." That's likely a significant change for Megan Garcia, the mother whose son died by suicide after allegedly believing bots that made him feel that was the only way to join the chatbot world that had apparently estranged him from the real world.
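For illustration only, the kind of input-and-output gating described above might look like the following sketch. Real moderation systems use trained classifiers rather than keyword lists, and every name here is hypothetical, not C.AI's actual code; the crisis number is the one given at the end of this article.

```python
# Deliberately simplistic stand-in for an input/output classifier gate.
SENSITIVE = {"self-harm", "suicide"}
CRISIS_RESOURCE = ("If you or someone you know is in distress, "
                   "call 1-800-273-TALK (8255).")

def classify(text):
    # Real systems would use a trained model here, not substring matching.
    return any(term in text.lower() for term in SENSITIVE)

def guarded_reply(user_input, generate):
    if classify(user_input):        # gate on the input side:
        return CRISIS_RESOURCE      # surface resources instead of engaging
    reply = generate(user_input)
    if classify(reply):             # gate on the output side too
        return "[response filtered]"
    return reply
```

The point of the sketch is the two-sided structure the blog describes: sensitive inputs are intercepted and redirected to resources, and sensitive outputs are filtered before they reach the user.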
    New disclaimers will also make it clearer that any chatbots marked as "psychologist," "therapist," "doctor," or "other similar terms in their names" should not be relied on to give "any type of professional advice."

    Some of the changes C.AI has made will impact all users, including improved detection, response, and intervention following sensitive user inputs. Adults can also customize the "time spent" notification feature to manage their own experience on the platform.

    Teen safety updates don't resolve all parents' concerns

    Suing parents are likely frustrated to see how fast C.AI could work to make the platform safer when it wanted to, rather than testing and rolling out a safer product from the start.

    Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case, told Ars that "this is the second time that Character.AI has announced new safety features within 24 hours of a devastating story about the dangerous design of their product, underscoring their lack of seriousness in addressing these fundamental problems."

    "Product safety shouldn't be a knee-jerk response to negative press; it should be built into the design and operation of a product, especially one marketed to young users," Carlton said. "Character.AI's proposed safety solutions are wholly insufficient for the problem at hand, and they fail to address the underlying design choices causing harm, such as the use of inappropriate training data or optimizing for anthropomorphic interactions."

    In both lawsuits filed against C.AI, parents want to see the model destroyed, not evolved.
    That's because not only do they consider the chats their kids experienced to be harmful, but they also believe it was unacceptable for C.AI to train its model on their kids' chats. Because the model could never be fully cleansed of their data, and because C.AI allegedly fails to adequately age-gate (it's currently unclear how many kids' data was used to train the AI model), they have asked courts to order C.AI to delete the model.

    It's also likely that parents won't be satisfied by the separate teen model, because they consider C.AI's age-verification method flawed. Currently, the only way that C.AI age-gates the platform is by asking users to self-report ages. For some kids on devices with strict parental controls, accessing the app might be more challenging, but other kids with fewer rules could seemingly access the adult model by lying about their ages. That's what happened in the case of one girl whose mother is suing after the girl started using C.AI when she was only 9, when it was supposedly only offered to users age 12 and up.

    Ars was able to use the same email address to attempt to register as a 13-year-old, 16-year-old, and adult without anything blocking the retries. C.AI's spokesperson told Ars that it's not supposed to work that way and reassured Ars that C.AI's trust and safety team would be notified.

    "You must be 13 or older to create an account on Character.AI," C.AI's spokesperson said in a statement provided to Ars. "Users under 18 receive a different experience on the platform, including a more conservative model to reduce the likelihood of encountering sensitive or suggestive content. Age is self-reported, as is industry-standard across other platforms.
    We have tools on the web and in the app preventing re-tries if someone fails the age gate."

    If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
  • Are You Ready for the Attack of the Copper Thieves?
    www.informationweek.com
    Copper thieves cost US businesses $1 billion a year and are a threat to critical infrastructure. What can you do to avoid putting resiliency at risk?
  • What Do We Know About the New Ransomware Gang Termite?
    www.informationweek.com
    Termite is quickly making a name for itself in the ransomware space. The threat actor group claimed responsibility for a November cyberattack on Blue Yonder, a supply chain management solutions company, according to CyberScoop. Shortly afterward, the group was linked with zero-day attacks on several Cleo file transfer products. How much damage is this group doing, and what do we know about Termite's tactics and motives?

    New Gang, Old Ransomware

    Termite is rapidly burrowing into the ransomware scene. While its name is new, the group is using a modified version of an older ransomware strain: Babuk. This strain of ransomware has been on law enforcement's radar for quite some time. In 2023, the US Department of Justice indicted a Russian national for using various ransomware variants, including Babuk, to target victims in multiple sectors. Babuk first arrived on the scene in December 2020, and it was used in more than 65 attacks. Actors using this strain demanded more than $49 million in ransoms, netting up to $13 million in payments, according to the US Justice Department. While Babuk has reemerged, different actors could very well be behind its use in Termite's recent exploits.

    "Babuk ransomware was leaked back in 2021. The builder is basically just the source code, so that anyone can compile the encrypting tool and then run their own ransomware campaign," says Aaron Walton, threat intelligence analyst at Expel, a managed detection and response provider.

    How is Termite putting the ransomware to work? Researchers have found that the group's ransomware uses a double extortion method, "which is very common these days," Mark Manglicmot, senior vice president of security services at cybersecurity company Arctic Wolf, tells InformationWeek.
    "They extort the victim for a decryptor and to prevent the release of stolen data publicly."

    A new ransomware group is not automatically noteworthy, but Termite's aggression and large-scale attacks early in its formation make it a group to watch. "Usually, these groups start with smaller instances and then they kind of build up to something bigger, but this new group didn't waste any time," says Manglicmot.

    Termite's Victims

    Termite appears to be a financially motivated threat actor. "They're attacking victims in different countries across different verticals," says Jon Miller, CEO and cofounder of anti-ransomware platform Halcyon. "The fact that they're executing without a theme makes me feel like they're opportunist-style hackers."

    Termite has hit 10 victims thus far, in sectors including automotive manufacturing, oil and gas, and government, according to Infosecurity Magazine. The group does have victims listed on its leak site, but it is possible there are more. "Maybe we could guess that there might be another handful that have paid ransom or have negotiated to stay off of [the] data leak site," says Walton.

    Given the group's aggression and opportunistic approach, it could conceivably execute disruptive attacks on other large companies. "Termite seems to be bold enough to impact a large number of organizations," says Walton. "That is normally a risky tactic that really brings the heat on you much faster than just hitting one organization and avoiding anything that could severely damage supply lines."

    The attack on Blue Yonder caused significant disruption to many organizations.
    Termite claims it has 16,000 e-mail lists and more than 200,000 insurance documents among a total of 680GB of stolen data, according to Infosecurity Magazine. The ransomware attack caused outages for Blue Yonder customers, including Starbucks and UK supermarket companies Morrisons and Sainsbury's, according to Bleeping Computer. Termite's exploitation of a vulnerability in several Cleo products is impacting victims in multiple sectors, including consumer products, food, shipping, and trucking, according to Huntress Labs.

    Ongoing Ransomware Risks

    Whether Termite is here to stay or not, ransomware continues to be a risk to enterprises. "With certain areas of the globe being destabilized, we could see even more of these types of behaviors pop up," says Manglicmot.

    As enterprise leaders assess the risk their organizations face, Miller advocates for learning about the common tactics that ransomware groups use to target victims. "It's really important for people to go out and educate themselves on what ransomware groups are targeting their vertical or like-sized companies," he says. "The majority of these groups use the exact same tactics over and over again in all their different victims."
  • The US Navy wants to use quantum computers for war games and much more
    www.newscientist.com
    The US Navy's Los Angeles-class fast attack submarine USS Hampton. MC2 Chase Stephens/U.S. Navy/Alamy

    The US Navy has a long wish list of applications for quantum computers, ranging from basic science (understanding corrosion, a fleet's constant enemy) to more intriguing uses like war game simulations. Although quantum computers have rapidly improved in recent years, they are not yet capable of all these tasks, but that hasn't stopped the military from dreaming up ways to use them. "We are committed to the axiom that whatever legacy model is now successful will lead to [our] demise if it does not
  • The sun may spit out giant solar flares more often than we thought
    www.newscientist.com
    This relatively small solar flare from October (the bright flash in the centre), spotted by NASA's Solar Dynamics Observatory, would be dwarfed by a superflare. NASA/SDO

    The sun may produce extremely powerful bursts of radiation more frequently than we thought. Such superflares seem to happen as often as once a century, according to a survey of sun-like stars, and might be accompanied by particle storms that could have devastating consequences for electronics on Earth. As the last big solar storm to hit Earth was 165 years ago, we might be in line for another soon, but it is uncertain how similar the sun is to these other stars.

    Direct measurements of the sun's activity only started towards the middle of the 20th century. In 1859, our star produced an extremely powerful solar flare, a burst of light radiation. These are often associated with a subsequent coronal mass ejection (CME), a bubble of magnetised plasma particles that shoots out into space. That flare was indeed followed by a CME that struck Earth and caused an intense geomagnetic storm, which was recorded by astronomers at the time and is now known as the Carrington event. If this happened today, it could knock out communication systems and power grids.

    There is also evidence on Earth of much more powerful storms long before the Carrington event. Assessments of radioactive forms of carbon in tree rings and ice cores suggest that Earth has occasionally been showered with very high-energy particles over periods of several days, but it is unclear whether these came from one-off, massive solar outbursts or from several smaller ones. It is also uncertain whether the sun can produce flares and particle storms so large in a single outburst.

    The frequency of these signs on Earth, as well as superflares that astronomers have recorded on other stars, suggested that these giant bursts tend to occur many hundreds to thousands of years apart.
    Now, Ilya Usoskin at the University of Oulu in Finland and his colleagues have surveyed 56,450 stars and found that sun-like stars appear to produce superflares much more often than this. "Superflares on sun-like stars are much more frequent than we thought before, roughly once per one or two centuries," says Usoskin. "If we believe that this projection to the sun is correct, then we expect a superflare on the sun roughly every 100 to 200 years, and extreme solar storms, as we know them, occur roughly once per 1500 or 2000 years. There is a mismatch."

    Usoskin and his colleagues measured the brightness of the stars using the Kepler space telescope and detected a total of 2889 superflares on 2527 of the stars. The energies for these flares were between 100 and 10,000 times the size of the largest measured from the sun, the Carrington event.

    "We still don't know whether such large flares also produce large particle events of the sort we have evidence for on Earth," says Usoskin, "but our current theories of the sun can't explain such large flares." This opens the question of what we are actually seeing, he says.

    "As a stellar flare survey, it looks really impressive," says Mathew Owens at the University of Reading, UK. "They've clearly got new methods for detecting flares with increased sensitivity." How much this tells us about the sun's flaring activity is harder to discern, says Owens, partly because it is difficult to accurately measure the rotation rate of other stars. "The devil is in the detail here," he says. The rotation rate is important because it's linked to how a star generates a magnetic field, and the magnetic field is linked to flaring activity, says Owens.

    Journal reference: Science, DOI: 10.1126/science.adl5441
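The quoted "once per one or two centuries" can be sanity-checked with the survey numbers above, if we assume (hypothetically; the paper's effective baseline may differ) that the flares were detected over roughly Kepler's four-year primary mission:

```python
# Back-of-envelope rate check under an assumed ~4-year observing baseline.
stars, flares, years = 56_450, 2_889, 4.0

flares_per_star_year = flares / (stars * years)
years_between = 1 / flares_per_star_year
print(f"~one superflare per star every {years_between:.0f} years")
```

Under that assumption the implied rate is on the order of one superflare per star per century, consistent with the order of magnitude Usoskin quotes.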
  • Why materials science is key to unlocking the next frontier of AI development
    www.technologyreview.com
    The Intel 4004, the first commercial microprocessor, was released in 1971. With 2,300 transistors packed into 12 mm², it heralded a revolution in computing. A little over 50 years later, Apple's M2 Ultra contains 134 billion transistors. The scale of progress is difficult to comprehend, but the evolution of semiconductors, driven for decades by Moore's Law, has paved a path from the emergence of personal computing and the internet to today's AI revolution. But this pace of innovation is not guaranteed, and the next frontier of technological advances, from the future of AI to new computing paradigms, will only happen if we think differently.

    Atomic challenges

    The modern microchip stretches both the limits of physics and credulity. Such is the atomic precision that a few atoms can decide the function of an entire chip. This marvel of engineering is the result of over 50 years of exponential scaling creating faster, smaller transistors. But we are reaching the physical limits of how small we can go, costs are increasing exponentially with complexity, and efficient power consumption is becoming increasingly difficult.

    In parallel, AI is demanding ever more computing power. Data from Epoch AI indicates the amount of computing needed to develop AI is quickly outstripping Moore's Law, doubling every six months in the deep learning era since 2010. These interlinked trends present challenges not just for the industry, but for society as a whole. Without new semiconductor innovation, today's AI models and research will be starved of computational resources and struggle to scale and evolve. Key sectors like AI, autonomous vehicles, and advanced robotics will hit bottlenecks, and energy use from high-performance computing and AI will continue to soar.
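Those figures are easy to check back-of-envelope. Taking 2023 as the M2 Ultra's release year and an idealized two-year doubling, Moore's Law carries the 4004's 2,300 transistors to roughly the M2 Ultra's count, while a six-month doubling time for AI compute grows enormously faster:

```python
# Does ~one doubling every 2 years carry 2,300 transistors (1971)
# to the M2 Ultra's 134 billion (2023)?
start, end = 1971, 2023
doublings = (end - start) / 2            # 26 doublings
projected = 2_300 * 2 ** doublings       # ~1.5e11, same order as 134e9

# AI training compute doubling every ~6 months vs. Moore's law over a
# decade: 20 doublings vs. 5.
ai_growth = 2 ** (10 / 0.5)
moore_growth = 2 ** (10 / 2)
print(f"projected transistors: {projected:.2e}; "
      f"AI compute outgrows Moore by {ai_growth / moore_growth:.0f}x per decade")
```

The projection lands within a factor of two of the real chip, which is why the article can call the trend "driven for decades by Moore's Law"; the decade-scale gap between the two doubling rates is the resource crunch the rest of the piece is about.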
    Materials intelligence

    At this inflection point, a complex, global ecosystem, from foundries and designers to highly specialized equipment manufacturers and materials solutions providers like Merck, is working together more closely than ever before to find the answers. All have a role to play, and the role of materials extends far, far beyond the silicon that makes up the wafer. Instead, materials intelligence is present in almost every stage of the chip production process, whether in chemical reactions to carve circuits at molecular scale (etching) or in adding incredibly thin layers to a wafer (deposition) with atomic precision: a human hair is 25,000 times thicker than the layers in leading-edge nodes.

    Yes, materials provide a chip's physical foundation and the substance of more powerful and compact components. But they are also integral to the advanced fabrication methods and novel chip designs that have underpinned the industry's rapid progress in recent decades. For this reason, materials science is taking on a heightened importance as we grapple with the limits of miniaturization. Advanced materials are needed more than ever for the industry to unlock the new designs and technologies capable of increasing chip efficiency, speed, and power.

    We are seeing novel chip architectures that embrace the third dimension and stack layers to optimize surface area usage while lowering energy consumption. The industry is harnessing advanced packaging techniques, where separate chiplets with varying functions are fused into a more efficient, powerful single chip. This is called heterogeneous integration. Materials are also allowing the industry to look beyond traditional compositions. Photonic chips, for example, harness light rather than electricity to transmit data. In all cases, our partners rely on us to discover materials never previously used in chips and to guide their use at the atomic level. This, in turn, is fostering the necessary conditions for AI to flourish in the immediate future.
    New frontiers

    The next big leap will involve thinking differently. The future of technological progress will be defined by our ability to look beyond traditional computing. Answers to mounting concerns over energy efficiency, costs, and scalability will be found in ambitious new approaches inspired by biological processes or grounded in the principles of quantum mechanics.

    While still in its infancy, quantum computing promises processing power and efficiencies well beyond the capabilities of classical computers. Even though practical, scalable quantum systems remain a long way off, their development is dependent on the discovery and application of state-of-the-art materials.

    Similarly, emerging paradigms like neuromorphic computing, modeled on the human brain with architectures mimicking our own neural networks, could provide the firepower and energy efficiency to unlock the next phase of AI development. Composed of a deeply complex web of artificial synapses and neurons, these chips would avoid traditional scalability roadblocks and the limitations of today's von Neumann computers, which separate memory and processing.

    Our biology consists of super complex, intertwined systems that have evolved by natural selection, but it can be inefficient; the human brain is capable of extraordinary feats of computational power, but it also requires sleep and careful upkeep. The most exciting step will be using advanced compute, AI and quantum, to finally understand and design systems inspired by biology. This combination will drive the power and ubiquity of next-generation computing and associated advances to human well-being.

    Until then, the insatiable demand for more computing power to drive AI's development poses difficult questions for an industry grappling with the fading of Moore's Law and the constraints of physics. The race is on to produce more powerful, more efficient, and faster chips to progress AI's transformative potential in every area of our lives.
    Materials are playing a hidden but increasingly crucial role in keeping pace, producing next-generation semiconductors and enabling the new computing paradigms that will deliver tomorrow's technology. But materials science's most important role is yet to come. Its true potential will be to take us, and AI, beyond silicon into new frontiers and the realms of science fiction by harnessing the building blocks of biology.

    This content was produced by EMD Electronics. It was not written by MIT Technology Review's editorial staff.
  • The Download: Google's Project Astra, and China's export bans
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

    Google's new Project Astra could be generative AI's killer app

    Google DeepMind has announced an impressive grab bag of new products and prototypes that may just let it seize back its lead in the race to turn generative artificial intelligence into a mass-market concern. Top billing goes to Gemini 2.0, the latest iteration of Google DeepMind's family of multimodal large language models, now redesigned around the ability to control agents, and a new version of Project Astra, the experimental everything app that the company teased at Google I/O in May.

    The margins between top-end models like Gemini 2.0 and those from rival labs like OpenAI and Anthropic are now slim. These days, advances in large language models are less about how good they are and more about what you can do with them. And that's where agents come in. MIT Technology Review got to try out Astra in a closed-door live demo last week. It gave us a hint at what's to come. Find out more in the full story. Will Douglas Heaven

    China banned exports of a few rare minerals to the US. Things could get messier. Casey Crownhart

    I've thought more about gallium and germanium over the last week than I ever have before (and probably more than anyone ever should). China banned the export of those materials to the US last week and placed restrictions on others. The move is just the latest drama in escalating trade tensions between the two countries. While the new export bans could have significant economic consequences, this might be only the beginning. China is a powerhouse, and not just in those niche materials; it's also a juggernaut in clean energy, and particularly in battery supply chains. So what comes next could have significant consequences for EVs and climate action more broadly. Read the full story.

    This story is from The Spark, our weekly climate and energy newsletter.
    Sign up to receive it in your inbox every Wednesday.

    The must-reads

    I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

    1. It's looking pretty likely 2024 will be the hottest year on record. But average temperatures are just one way of assessing our warming world. (New Scientist $) + The first few months of 2025 are likely to be hotter than average, too. (Reuters) + The US is about to make a sharp turn on climate policy. (MIT Technology Review)

    2. Meta has donated $1 million to Trump's inaugural fund, in an effort to strengthen their previously fractious relationship. (WSJ $) + Mark Zuckerberg isn't the only tech figure seeking the President-elect's ear. (Insider $)

    3. How China secretly repatriates Uyghurs. Even the United Nations is seemingly powerless to stop it. (WP $) + Uyghurs outside China are traumatized. Now they're starting to talk about it. (MIT Technology Review)

    4. How Big Tech decides when to scrub a user's digital footprint. Murder suspect Luigi Mangione's Instagram has been taken down, but his Goodreads hasn't. (NYT $) + Why it's dangerous to treat public online accounts as the full story. (NY Mag $)

    5. Russia-backed hackers targeted Ukraine's military using criminal tools, which makes it even harder to work out who did it. (TechCrunch)

    6. What Cruise's exit means for the rest of the robotaxi industry. Automakers are becoming frustrated waiting for the technology to mature. (The Verge) + Cruise will focus on developing fully autonomous personal vehicles instead. (NYT $)

    7. Researching risky pathogens is extremely high stakes. The potential for abuse has some researchers worried we shouldn't undertake it at all. (Undark Magazine) + Meet the scientist at the center of the covid lab leak controversy. (MIT Technology Review)

    8. Altermagnetism could be computing's next big thing. It would lead to faster, more reliable electronic devices. (FT $)

    9. Why some people need so little sleep. Gene mutations appear to hold at least some of the answers.
(Knowable Magazine)+ Babies spend most of their time asleep. New technologies are beginning to reveal why. (MIT Technology Review)10 Inside the creeping normalization of AI movies The worlds largest TV manufacturer wants to make films for people too lazy to change the channel. (404 Media)+ Unsurprisingly, itll push targeted ads, too. (Ars Technica)+ How AI-generated video is changing film. (MIT Technology Review) Quote of the day "They've made him a martyr for all the troubles people have had with their own insurance companies." Felipe Rodriguez, an adjunct professor at the John Jay College of Criminal Justice in New York, explains why murder suspect Luigi Mangione is being lionized online to Reuters. The big story Why AI could eat quantum computings lunch November 2024 Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that theyll be a game changer for fields as diverse as finance, drug discovery, and logistics. But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computings purported home turf might not be so safe after all. Read the full story. Edd Gent We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet 'em at me.) + Working life getting you down? These pictures of bygone office malaise will make you feel a whole lot better (or worsethanks Will!) + Gen Z are getting really into documenting their lives via digital cameras, apparently. + If you believe that Alan MacMasters invented the first electric bread toaster, Im sorry to inform you that youve fallen for an elaborate online hoax.+ The case for a better Turing test for AI-generated art.
  • Jeff Bezos reportedly following in Mark Zuckerberg's footsteps with a $1 million donation from Amazon to Trump's inauguration
    www.businessinsider.com
    Amazon plans to donate $1 million to Donald Trump's inauguration, according to WSJ. Meta also confirmed that it will be donating $1 million to Trump's inaugural fund. The moves show Big Tech's effort to mend relations with Trump, who has been critical of the industry.
    Jeff Bezos' Amazon plans to donate $1 million to Donald Trump's inauguration, following Wednesday's news that Mark Zuckerberg's Meta made the same contribution, The Wall Street Journal reported. Meta confirmed to the Journal Wednesday that the company donated $1 million to the president-elect's inaugural fund.
    The donations would mark a shift in the relationship between tech leaders and Trump, who had previously been critical of Big Tech bosses. Trump has previously accused Zuckerberg and Bezos of bias against his administration, among other criticisms.
    Last month, the Meta CEO paid a visit to Trump at the president-elect's Mar-a-Lago resort for Thanksgiving Eve dinner. Google CEO Sundar Pichai also plans to meet with Trump, The Information reported.
    "Mark Zuckerberg's been over to see me, and I can tell you, Elon is another and Jeff Bezos is coming up next week, and I want to get ideas from them," Trump told CNBC's Jim Cramer on Thursday.
    Spokespeople for Amazon and Trump did not respond to a request for comment.
    In previous years, Bezos and Trump have frequently feuded with each other. During his first campaign and term, Trump would take shots at Amazon, once stating that the company was doing "great damage to tax paying retailers."
    Bezos, on the other hand, has previously criticized Trump's inflammatory rhetoric, including the president-elect's call at the time to imprison Hillary Clinton.
    As Trump took office in 2017, Amazon donated about $58,000 to Trump's inauguration, much less than what other tech companies donated at the time, according to the Journal.
    Similarly, Zuckerberg has criticized Trump's violent remarks on Facebook.
    In 2021, the social media platform took the extraordinary step of deplatforming the president after Trump praised Jan. 6 rioters.
    Both tech leaders have appeared to warm up to Trump in recent months.
    The Amazon tycoon said at The New York Times' DealBook Summit last week that he's "actually very optimistic" about a second Trump term, saying that Trump has likely "grown in the last eight years" and that he was encouraged by the president-elect's focus on deregulation.
    "He seems to have a lot of energy around reducing regulation. If I can help do that, I'm going to help him," Bezos said.
    Zuckerberg also appears to be mending his relationship with the president-elect, despite Trump threatening to throw the Meta CEO in prison as recently as July. After the first assassination attempt against Trump, Zuckerberg called the president-elect a "badass" but stopped short of endorsing him during the Bloomberg interview.
    On November 6, Zuckerberg joined the chorus of CEOs congratulating Trump on his decisive election victory.
    "Looking forward to working with you and your administration," Zuckerberg wrote in a Threads post.