• President Joe Biden Warns of Big Tech and Social Media Manipulation in Final Address: "The Truth Is Smothered by Lies Told for Power and for Profit"
    variety.com
    In his final address to the nation, five days before turning over the Oval Office to President-elect Donald Trump, President Joe Biden delivered a dire warning against the dangers of big tech and a social media landscape without fact-checking. Biden first warned of an oligarchy forming in the upper echelons of America that is threatening democracy as we know it. "Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedoms, and a fair shot for everyone to get ahead," Biden said.

    The 46th president was most likely referring to Mark Zuckerberg, who has ended fact-checking on Facebook and Instagram and donated $1 million to Trump through his tech giant Meta, and to X and Tesla mogul Elon Musk, who has made it abundantly clear he will stand as a huge point of influence through Trump's upcoming term.

    Biden emphasized that social media giving up on fact-checking could pose a great danger to the freedom of information and the pursuit of truth in the U.S., adding that the free press is crumbling and giving way to an avalanche of misinformation and disinformation enabling the abuse of power. "The truth is smothered by lies told for power and for profit," Biden said. "We must hold the social platforms accountable to protect our children and our families and our very democracy from the abuse of power."

    Biden also took a slight jab at Trump during the address, demanding the Constitution be amended so that no president is immune from crimes that he or she commits while in office. Trump was found guilty in May 2024 of 34 felonies related to falsifying business records to cover up payments to adult film star Stormy Daniels. Biden also called for an 18-year term limit for Supreme Court justices, which has remained a hot-button issue following the overturning of Roe v. Wade in 2022.
  • Switch Online's Missions & Rewards Adds Donkey Kong Country Returns HD Icons
    www.nintendolife.com
    Wave 1 is now available. Nintendo has rolled out another batch of icons, this time celebrating the release of Donkey Kong Country Returns HD for Switch. Read the full article on nintendolife.com
  • Reusable rocket startup Stoke raised another massive round: $260M
    techcrunch.com
    Posted: 2:50 PM PST, January 15, 2025. Y Combinator alum Stoke Space just raised a $260 million Series C, bringing its total raised to $480 million. This follows $100 million raised in October 2023 and $75 million in December 2021.

    The company was founded in 2019 by Blue Origin veterans Andy Lapsa and Tom Feldman, who launched his career as a SpaceX intern. Stoke attended the Winter 2021 YC cohort. The space startup has an ambitious goal: to build the first fully reusable rocket, meaning both the booster and the second stage. Last month, it shared video of a successful test of its first-stage rocket engine. The cash will help it build its new facilities at Florida's historic Cape Canaveral Space Force Station.

    Investors in this latest round include Breakthrough Energy Ventures, Glade Brook Capital Partners, Industrious Ventures, Point72 Ventures, Seven Seven Six, Y Combinator, and several others.
  • Powerful Webb Telescope captures photos of one of the earliest supernovas ever seen
    www.foxnews.com
    By Greg Wehner, Fox News. Published January 15, 2025, 9:47pm EST.

    [Video: a timelapse built from NASA's James Webb Space Telescope data highlights the evolution of one light echo in the vicinity of the supernova remnant Cassiopeia A.]

    NASA's James Webb Space Telescope (JWST) has captured photos of one of the earliest supernovas ever seen, with features appearing like grains and knots found in a cut of wood.

    "Once upon a time, the core of a massive star collapsed, creating a shockwave that blasted outward, ripping the star apart as it went," NASA said on its website. "When the shockwave reached the star's surface, it punched through, generating a brief, intense pulse of X-rays and ultraviolet light that traveled outward into the surrounding space."

    Now, nearly 350 years later, scientists are getting a view of the aftermath as the pulse of light reaches interstellar material and causes it to glow. The infrared glow was captured by JWST, revealing details that look like knots and whorls found in wood grain.

    [Image: the region around supernova remnant Cassiopeia A, originally imaged by NASA's Spitzer Space Telescope in 2008. By taking multiple images of this region over three years with Spitzer, researchers were able to examine a number of light echoes; Webb has now imaged some of them in much greater detail.]

    "Even as a star dies, its light endures, echoing across the cosmos. It's been an extraordinary three years since we launched NASA's James Webb Space Telescope. Every image, every discovery, shows a portrait not only of the majesty of the universe but the power of the NASA team and the promise of international partnerships. This groundbreaking mission, NASA's largest international space science collaboration, is a true testament to NASA's ingenuity, teamwork, and pursuit of excellence," NASA Administrator Bill Nelson said. "What a privilege it has been to oversee this monumental effort, shaped by the tireless dedication of thousands of scientists and engineers around the globe. This latest image beautifully captures the lasting legacy of Webb, a keyhole into the past and a mission that will inspire generations to come."

    While beautiful in nature, the observations also give astronomers the ability to map the three-dimensional structure of interstellar dust and gas for the first time. "We were pretty shocked to see this level of detail," said Jacob Jencson of Caltech/IPAC in Pasadena, the principal investigator of the science program. Josh Peek of the Space Telescope Science Institute in Baltimore, also a member of the team, said they see layers like those of an onion.

    [Image: shimmering cosmic curtains of interstellar gas and dust heated by the flashbulb explosion of a long-ago supernova.]

    "We think every dense, dusty region that we see, and most of the ones we don't see, look like this on the inside," he said. "We just have never been able to look inside them before."

    The images produced by JWST's near-infrared camera (NIRCam) highlight a phenomenon called a light echo, NASA said, which is created when a star explodes or erupts, flashing light into surrounding masses of dust and causing them to shine. Visible light echoes occur when the light reflects off interstellar material, while those at infrared wavelengths occur when the dust is warmed by energetic radiation, causing it to glow. Scientists targeted a light echo previously observed by NASA's retired Spitzer Space Telescope; it is one of dozens found near the remains of the Cassiopeia A supernova.

    The Webb images show tightly packed sheets, with filaments displaying structures on what NASA called "remarkably small scales" of about 400 astronomical units, or less than one-hundredth of a light-year. One astronomical unit is the average distance between the Earth and the Sun, and Neptune's orbit is 60 astronomical units in diameter. "We did not know that the interstellar medium had structures on that small of a scale, let alone that it was sheet-like," Peek said.

    Scientists compared the discovery to a medical CT scan. "We have three slices taken at three different times, which will allow us to study the true 3D structure. It will completely change the way we study the interstellar medium," said Armin Rest of the Space Telescope Science Institute, a member of the team.

    The team's findings will be presented this week at the 245th American Astronomical Society meeting in Washington, D.C. The Webb Telescope, the successor to the Hubble and the largest telescope ever launched into space, is a joint project of NASA and the European Space Agency.
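    The "400 astronomical units, or less than one-hundredth of a light-year" figure is easy to verify; here is a minimal sketch of the unit conversion, using the standard value of roughly 63,241 astronomical units per light-year:

    ```python
    # Check the article's scale claim: 400 AU is "less than
    # one-hundredth of a light-year."
    AU_PER_LIGHT_YEAR = 63_241  # 1 light-year is about 63,241 AU

    structure_au = 400
    structure_ly = structure_au / AU_PER_LIGHT_YEAR

    print(f"400 AU = {structure_ly:.4f} light-years")  # ~0.0063
    print(structure_ly < 0.01)                         # True
    ```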
  • Gmail's new button makes using Gemini to reply to emails on Android a breeze
    www.zdnet.com
    A new 'insert' button will make it faster to use Gemini's AI-written replies.
  • Explaining The Inexplicable Mystery Of Why ChatGPT o1 Suddenly Switches From English To Chinese When Doing AI Reasoning
    www.forbes.com
    Every AI mystery deserves a logical, plausible explanation, including the latest one about OpenAI's ChatGPT o1 advanced model.

    In today's column, I aim to resolve the AI mystery floating around on social media and in the mainstream news regarding OpenAI's ChatGPT o1 advanced AI model suddenly switching momentarily from working in English to working in Chinese. In case you haven't heard about this surprising aspect, users have been posting tweets showcasing o1 doing just that. The AI is solving a user-entered prompt, and while presenting the logical steps, the language shifts from English to Chinese. This occurs for a line or two and then reverts back to English.

    Is it some kind of tomfoolery? Hacking? Maybe the AI is going off the deep end? Lots of postulated theories and wild conjectures have been touted. Let's talk about it.

    This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For my coverage of the top-of-the-line ChatGPT o1 model and its advanced functionality, see the link here and the link here.

    What's Going On With o1

    Allow me to set the stage by laying out the known facts concerning the mystery that is afoot. ChatGPT o1 is a generative AI large language model (LLM) that generally rates at or quite near the pinnacle of modern-day AI models. There are plentiful advances jammed into o1. When you use o1, you can immediately discern that the AI has something special going on. Happy face.

    To be clear, neither o1 nor any current AI is sentient, nor have we reached artificial general intelligence (AGI). If you are interested in where we stand on achieving AGI, and on the vaunted artificial superintelligence (ASI), see my analysis at the link here. At this time, generative AI and LLMs are based on human-devised mathematical and computational pattern-matching that in the large does an amazing job of mimicking human writing and conversation.

    Those who have been using o1 for several months now would likely say that they relish doing so. It does its job: you enter a prompt; you get a reply. One nice twist to o1 is that the reply customarily includes a listing of the steps the AI took to arrive at the answer presented. This is commonly known as chain-of-thought (CoT), see my detailed explanation at the link here, consisting of a series of delineated steps of the internal processing by the AI.

    So far, so good. Now for the mystery. Various users have indicated that from time to time o1 suddenly switches from English to Chinese when displaying the chain-of-thought steps being undertaken. Just as quickly, the portrayal shifts back to English. It is almost like seeing a mirage, except that it really does happen, and printouts or screen snapshots bear this out. Are people's eyes deceiving them? Nope, the accounts of this happening are verifiable and not merely fancy.

    Explanations Are Over-The-Top

    OpenAI seems to have remained mum and isn't telling us what is at the root of this oddity. Their AI is considered proprietary, and they don't allow others to poke around in the internals, nor do they make the internal design and mechanisms publicly available.
    This means that everyone can only guess what the heck might be happening inside o1. Into this vacuum has rushed a slew of quite wild suggestions. Some of the nuttiest conjecture postulates that the Chinese have taken over o1 or perhaps are secretly running OpenAI. Another equally outlandish idea is that a Chinese hacker has planted something into o1 or has accessed a secret backdoor. On and on these conspiracy-oriented theories go. Social media has an overworked imagination, indubitably.

    I am going to categorically reject those zany schemes. Why so? Know this: the same overall issue of switching out of English has been documented by others and includes instances of switching to German, French, Portuguese, and so on. The gist is that Chinese is not the sole purveyor of the grand switcheroo; other languages beyond Chinese are momentarily displayed too. Perhaps I find myself out on a limb, but I seriously doubt that an entire cabal of earthly hackers or numerous countries across the globe are all sneakily putting their hands into the internals of o1. My viewpoint is that there is something more straightforward that can explain the multitude of sudden language appearances.

    Laying Out A Reasonable Guess

    I will share with you my theory, or educated guess, at what might be occurring. Don't take this to the bank. There are lots of technical reasons that something like this could take place. Let's go with one that I think is plausible, makes abundant sense, and fits with the reported facts. Is it the winner-winner chicken dinner? I can't say for sure, since the AI is proprietary and isn't open for inspection. Put on your Sherlock Holmes cap and come along for a fascinating journey into the guts of contemporary generative AI and LLMs.

    Leaning Into The Core

    When generative AI and LLMs are initially put together, the big first step entails scanning lots of data to do pattern-matching on how humans write. All kinds of essays, narratives, stories, poems, and the like are examined. Complex mathematical and computational mechanisms try to identify how words relate to other words. This is coined a large language model due to being undertaken in the large, such as scanning millions upon millions of materials on the Internet. Without the largeness, we wouldn't have the fluency currently exhibited by LLMs (for those interested in SLMs, small language models, I showcase how they differ from LLMs at the link here). I'll use a simple example that will gradually aid in unraveling the mystery.

    The Word Dog Comes To Mind

    Consider the word "dog" as a commonplace word that readily would be scanned when examining content on the Internet. We can assume that "dog" is used immensely across the web. That's a no-brainer assumption. Everyone has a beloved dog, or a story about dogs, or something to say about dogs. Humankind pretty much loves dogs. If you were to detect which other words appear to be associated with the word "dog", what comes to mind? Some obvious ones might be "fluffy", "four-legged", "tail-wagging", etc. From the perspective of what is taking place inside the AI, the word "dog" is associated mathematically and computationally with the words "fluffy", "four-legged", "tail-wagging", and so on. The words themselves have no meaning. They are each a jumble of letters. The word "dog" consists of the letter d followed by the letter o and followed by the letter g. You should think of the word "dog" as just a bunch of letters collected together, and we will treat that collection of letters as a kind of blob.
    The blob of the letters in "dog" is statistically associated with the blob of the word consisting of the letters "fluffy". My aim here is to have you disassociate in your mind that the word "dog" has any meaning, such as images in your head of this or that favored dog. Instead, the word "dog" is a collection of letters and is associated with lots of other collections of letters that form other words.

    The French Word For Dog

    Shifting gears, I will pick a language other than English to set up the reveal that will be momentarily discussed. I lived in France for a while and love the French language, though I admit I am extremely rusty and would never even attempt to speak French aloud. Anyway, if it's OK with you all, I will envision that we are interested in the French word for dog (which is going to be easier as a language choice than picking a Chinese word, due to the symbols used in Chinese writing, but the underlying precept is the same). There is a French masculine version, "chien", and a feminine version, "chienne", for dog, but let's simplify things and, for the sake of discussion, go with just the word "chien" (thanks for playing along). If you don't know French, and if I showed you the word "chien", I'd bet that you wouldn't know what the word means. This makes sense. For example, the word "dog" has the letters d, o, and g, but none of those letters exist in the word "chien". The French word for dog doesn't resemble the English word for dog. You are unable to readily figure out that they are essentially the same word in terms of what they signify.

    Dog And Chien Have Roughly The Same Factors

    Suppose we went ahead and did a scan of the Internet to find the word "chien" and identify other words that seem statistically related to it. What would we find? The odds are that you would see that "chien" is associated with the words "fluffy", "four-legged", "tail-wagging", and the like. And what could you therefore say about the word "dog" versus the word "chien"? Well, both of those words are associated with roughly the same set of other words. Since they are overwhelmingly associated with nearly the same set of other words, we could reasonably conclude that both words probably have the same assorted meaning. They are two peas in a pod. The crux is that the word "dog" and the word "chien" can be treated as the same, not because you and I in our heads know them to refer to the same thing, but because they both point to sets of other associated words that are approximately the same.
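    To make the "same factors" idea concrete, here is a minimal sketch of the underlying statistics. It is purely illustrative, not OpenAI's actual mechanism (which is proprietary): the toy corpus and the bag-of-context-words representation are my own assumptions, standing in for the dense embeddings a real LLM would learn.

    ```python
    from collections import Counter
    from math import sqrt

    # Toy corpus standing in for scanned web pages: mostly English, plus
    # one stray French fragment picked up incidentally during the scan.
    corpus = [
        "the fluffy dog wagged its tail-wagging four-legged body",
        "a dog is a fluffy four-legged tail-wagging companion",
        "le chien fluffy four-legged tail-wagging",  # incidental French page
    ]

    def context_vector(word: str) -> Counter:
        """Count the words that co-occur in the same sentence as `word`."""
        counts = Counter()
        for sentence in corpus:
            tokens = sentence.split()
            if word in tokens:
                counts.update(t for t in tokens if t != word)
        return counts

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[k] * b[k] for k in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # "dog" and "chien" never co-occur, yet their context words overlap,
    # so their vectors point in a similar direction: two peas in a pod.
    print(cosine(context_vector("dog"), context_vector("chien")))  # ~0.64
    ```

    Real LLMs learn dense embedding vectors rather than raw co-occurrence counts, but the principle is the same: words used in the same contexts land near one another, regardless of language.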
    LLMs Pick Up Other Languages During Data Training

    The deal is this. When doing the initial data training of generative AI and LLMs, the widespread scan of the Internet is usually aimed primarily at English words (that tends to be true of English-oriented LLMs, which English-speaking AI developers tend to build). During my talks about AI, attendees are often shocked to learn that while the data training is taking place, bits and pieces of other languages get scanned too. This is more incidental than purposeful. You can see why. The scanning moves from website to website, and sometimes there might be content in something other than English, maybe just a page here or there. The chances are pretty high that the scanning will eventually touch on a wide array of languages other than English, such as French, German, Chinese, etc., not at a full clip, just on a random basis.

    What does the AI do with those said-to-be foreign words? If it were you or me trying to read all kinds of websites, the moment we hit a page in something other than English, we might be tempted to set the verbiage aside, figuring that since the principal pursuit is English, anything that isn't English gets discarded. The usual approach with AI is that the developers just let whatever language is encountered be encompassed by the scanning and pattern-matching. No need to try to kick it out. Just toss it into the pile and keep churning. This produces an exciting and quite intriguing outcome, so keep reading.

    Bringing The Dog Back Into The Picture

    Imagine that an Internet scan is taking place, and the word "dog" is encountered. Later, the words "fluffy", "four-legged", "tail-wagging", and others are found and determined to be statistically related to the word "dog". The same might happen with the word "chien". Then the AI mathematically and computationally construes that "dog" and "chien" appear to reference the same thing. It is almost as though the AI crafts an internal English-French dictionary associating English words with French words. The downside is that, since that wasn't the main goal, and since the volume and variety of French words encountered might be relatively slim, this English-French dictionary is not necessarily complete. Gaps might readily exist.

    Various AI research studies have shown that English-focused LLMs often end up able to readily switch to using other languages that were scanned during data training; see my analysis at the link here. The phenomenon is an unintended consequence, not something particularly planned for. Also, the switching is not necessarily fluent in the other language and might be flawed or incomplete. You can likely envision the surprise of AI developers whose LLM suddenly could spout a different language, such as French or Chinese. Their first thought was, heck, how did that happen? Researchers eventually found that the smattering of any other language encountered can lead to the AI devising a multilingual capacity, of sorts, in a somewhat mechanical way.

    Mystery Part 1 Is Explained

    Returning to the mystery at hand: how is it that o1 can suddenly switch to Chinese, French, German, or whatever other language beyond English? The answer is straightforward, namely that the AI picked up an informal smattering of those languages during the initial data training. Boom, drop the mic. Whoa, you might be saying, hold your horses. It isn't just that o1 displays something in a language other than English; it is also that it suddenly does so seemingly out of the blue. What's up with that? I hear you. We need to resolve that second part of the mystery.

    When Something Points To Something Useful

    Go with me on a handy thought experiment. Free your mind. Throughout all the instances of the English word "dog", suppose that at no point did we encounter the word "whisper" while scanning the Internet. Those two words never came up in any connected way. Meanwhile, imagine that the French word "chien" at times was statistically found to connect with the word "whisper". Please don't argue the point, just go with the flow.
    Be cool. Here's the clever part. When the AI is computationally trying to solve a problem or answer a question, the internal structure is typically searched to find a suitable response. Pretend I typed this question into generative AI.

    My entered prompt: "Can a dog whisper?"

    The AI searches throughout the internal structure. There aren't any patterns on the word "dog" and the word "whisper". Sad face. But remember that the word "chien" exists in there too, plus we had found that "chien" has an association with the word "whisper". That's good news: the AI associates "dog" and "chien" as essentially the same word, and luckily "chien" is associated with "whisper". Stated overtly, you might remember those days of algebra where they kept saying if A is to B, and B is to C, then you can reasonably conclude that A is to C. Remember those rules of life? Nifty. Here, in essence, "dog" is to "chien", while "chien" is to "whisper", and thus we can say that "dog" is also to "whisper". Logic prevails. The AI is going to be able to answer the question, doing so by accessing the French words that perchance were picked up during the initial data scanning. Internally, suppose the AI composes this sentence: "Oui, un chien peut chuchoter." That is French for saying that yes, a dog can whisper. An answer was generated, scoring a victory for generative AI, but we need to do a little more sleuthing.

    Final Twist That Deals With Displaying Results

    Would you normally expect to see a French sentence displayed as a response when using an English-focused LLM? No. Not if you are using an English-language-based LLM that is set to show principally English responses, and you haven't explicitly told the AI to start displaying in French (or whatever language). The AI might have the French sentence internally stored and then convert it into English to display the English version to you. That's our final twist here. Remember that the report by users is that the language switcheroo only seems to happen while the chain of thought is underway. The chances are that language conversion isn't necessarily active for the chain-of-thought derivations. It is activated for the final response, but not for the intervening lines of so-called reasoning. This also explains why the AI suddenly switches back out of the other language and continues forward in English thereafter. The basis for doing so is that English in this case is the predominant form of the words that were patterned on. The switch to French was merely to deal with resolving "whisper" in this instance. Once that happened, if the prompt or question had other parts to it, the AI would simply resume in English for the rest of the way. Boom, drop the mic (for real this time).

    The Logical Explanation Is Satisfying

    In recap, most generative AI and LLMs tend to pick up words of other languages beyond English during the initial data training and scanning of the Internet. Those words enter the massive statistical stew. They are considered fair game for use by the AI. If those non-English words are going to be helpful in generating a response to a user prompt, so be it. As they say, use any port in a storm. The AI is programmed to seek a response to a user inquiry, and spanning across languages is easy-peasy.
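    Here is a minimal sketch of that whole hypothesized pipeline, assembled from the pieces above. Everything in it is my illustrative assumption rather than OpenAI's disclosed design: the toy association tables, the equivalence map, and the rule that only the final answer gets converted to English.

    ```python
    # Toy association store: which words each known word is linked to.
    associations = {
        "dog":   {"fluffy", "four-legged", "tail-wagging"},
        "chien": {"fluffy", "four-legged", "tail-wagging", "whisper"},
    }

    # Words treated as equivalent because their association sets overlap.
    equivalents = {"dog": ["chien"]}

    # Tiny internal French-to-English conversion table.
    to_english = {"Oui, un chien peut chuchoter.": "Yes, a dog can whisper."}

    def answer_can_x_do_y(subject: str, verb: str) -> str:
        chain_of_thought = []
        draft = None
        if verb in associations.get(subject, set()):
            draft = f"Yes, a {subject} can {verb}."
            chain_of_thought.append(draft)
        else:
            # A is to B, B is to C, therefore A is to C: hop through an
            # equivalent word picked up from another language.
            for alt in equivalents.get(subject, []):
                if verb in associations[alt]:
                    draft = "Oui, un chien peut chuchoter."  # surfaces in French
                    chain_of_thought.append(draft)
                    break
        if draft is None:
            return "No pattern found."
        # Chain-of-thought steps are shown as-is, with no language
        # conversion -- this is where users spot the sudden switch.
        for step in chain_of_thought:
            print("CoT:", step)
        # Only the final answer is converted back to English for display.
        return to_english.get(draft, draft)

    print(answer_can_x_do_y("dog", "whisper"))
    # CoT: Oui, un chien peut chuchoter.
    # Yes, a dog can whisper.
    ```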
    That cross-language usage might also be flawed, depending on how much of the respective other languages was involved in the data training. A significant clue in the o1 mystery is that the reported instances are infrequent and only seem to arise in the chain of thought. This can be linked to the notion that while the AI is composing a response, there isn't a need to convert from a non-English language to English. Those are just intermediary steps that are merely grist for the mill. The AI doesn't have any computational need to convert them from one language to another. Only once a final result is ready would a language conversion be warranted.

    That, then, is one reasonably sensible and altogether logical way of resolving the mystery. Of course, I mentioned at the get-go that there are other logical possibilities too. I just wanted to share an explanation that seems to cover the proper bases. Now then, some might be tempted to reject the logic-based route entirely and argue for something more sinister or incredible, perhaps ghosts hiding inside o1, or the AI starting to take on a life of its own. Imagine all the wild possibilities.

    Let's end with a final thought expressed by the great Albert Einstein: "Logic will get you from A to B. Imagination will take you everywhere."
  • www.techspot.com
    In a nutshell: JavaScript is about to become a matter of legal proceedings between competing parties. Oracle claims ownership of the trademark, but the company will now have to defend its questionable position in court as the community argues the term has become generic.

    The initial attempt to cancel Oracle's ownership of the "JavaScript" trademark was unsuccessful. Deno Land and other prominent JS community members recently petitioned the United States Patent and Trademark Office (USPTO), asking the agency to retire the trademark once and for all, but Oracle was unwilling to let go of its ownership. Deno recently provided an update on the matter via X, stating that Oracle must now provide its formal answer to the petitioners. Even though they know what that answer will be, Deno and the community are prepared to prove in court that the trademark has become generic and that Oracle has little to do with the language's ongoing development.

    Oracle obtained the JavaScript trademark through its acquisition of Sun Microsystems, which had held the mark since the late 1990s. Deno Land, the organization that manages the Deno runtime project for JavaScript and other web-based frameworks, claims that Oracle has made no valuable contributions to the programming language over the years. Deno also claims that Oracle broke the law when it renewed the trademark a few years ago: it alleges that Oracle illegally used a screenshot of the open-source runtime project Node.js to prove to the USPTO that it commercially owned JavaScript. The petition also stated that the corporation did not sell any actual product or service based on JS, essentially abandoning the trademark.

    Someone on X tried to support Oracle's position by stating that JavaScript is a widely recognized term, just like "coke." Deno countered that it is apples and oranges, since the Coca-Cola company sells a product called "Coke." Other users in the thread noted that Oracle doesn't miss a single opportunity to gain more hate from the software community.

    At any rate, the fight over the JavaScript trademark threatens to become another troublesome tug-of-war in the tech industry, comparable to the clash between Automattic and WP Engine over WordPress ownership. Oracle must formally reply by February 3, but the legal quarrel could continue well into 2026 unless it concedes.
  • Astronaut's latest stunning photo has so much going on in it
    www.digitaltrends.com
    NASA astronaut Don Pettit has been busy with his camera again. The crack photographer recently shared another stunning image, this one captured from the window of a Crew Dragon spacecraft docked at the International Space Station (ISS).

    "One photo with: Milkyway, Zodical [sic] light, Starlink satellites as streaks, stars as pin points, atmosphere on edge showing OH emission as burned umber (my favorite Crayon color), soon to rise sun, and cities at night as streaks," Pettit wrote in a post accompanying the photo, adding that it was taken two days earlier from the Crew 9 Dragon vehicle port.

    Earth is easily identified at the bottom of the picture, as are the many stars that dot the rest of the image. Look more closely, however, and you'll see a number of streaks in the blackness, which Pettit identifies as SpaceX Starlink satellites that provide internet connectivity to folks back on terra firma.

    The Milky Way can also be seen sweeping across the center of the photo, while the zodiacal light that Pettit mentions is the faint, diffuse glow that appears as a triangular or cone-shaped illumination. This ethereal phenomenon is caused by sunlight scattering off interplanetary dust particles in our solar system. OH emission, also known as hydroxyl airglow, is a natural phenomenon that occurs in Earth's upper atmosphere, specifically in the mesosphere and lower thermosphere region. It is characterized by the release of infrared radiation from excited hydroxyl (OH) molecules and can be seen lining Earth as a brownish color in Pettit's image.

    With the ISS experiencing 16 sunrises and sunsets a day, astronauts aboard the space-based laboratory are treated to an ever-changing panorama. Across his four space missions over the last 30 years, Pettit has earned a deserved reputation for outstanding photography, capturing sublime images of Earth and beyond. The advent of social media has allowed him to share his work with a growing audience of fans who never quite know what he's going to post next. Surprises from his current mission, which began in September and runs through March, have included a remarkable shot of a Crew Dragon spacecraft returning to Earth at high speed at the end of a mission. He also managed to capture a Starship launch from SpaceX's Starbase site in Texas when the ISS, through sheer luck, passed overhead during liftoff.
  • Drake Sues Universal Music Group for Defamation Over Kendrick Lamar Diss Track
    www.wsj.com
    The rapper alleges the label promoted "Not Like Us" to harm him and drive profits. Universal denied the allegations.
  • This PDF contains a playable copy of Doom
    arstechnica.com
    Because we can: This PDF contains a playable copy of Doom. Adobe Acrobat's little-used JavaScript support gets exploited in Chromium browsers. By Kyle Orland, Jan 15, 2025, 11:45 am.

    [Image: Have you ever fired a BFG in a PDF? Credit: Ading2210]

    Here at Ars, we're suckers for stories about hackers getting Doom running on everything from CAPTCHA robot checks and Windows' notepad.exe to AI hallucinations and fluorescing gut bacteria. Despite all that experience, we were still thrown for a loop by a recent demonstration of Doom running in the usually static confines of a PDF file.

    On the GitHub page for the quixotic project, coder ading2210 discusses how Adobe Acrobat included some robust support for JavaScript in the PDF file format. That JS coding support, which dates back decades and is still fully documented in Adobe's official PDF specs, is currently implemented in a more limited, more secure form as part of PDFium, the built-in PDF-rendering engine of Chromium-based browsers.

    In the past, hackers have used this little-known Adobe feature to code simple games like Breakout and Tetris into PDF documents. But ading2210 went further, recompiling a streamlined fork of Doom's open source code using an old version of Emscripten that outputs optimized asm.js code.

    With that code loaded, the Doom PDF can take inputs via the user typing in a designated text field and generate "video" output in the form of converted ASCII text fed into 200 individual text fields, each representing a horizontal line of the Doom display. The text in those fields is enough to simulate a six-color monochrome display at a "pretty poor but playable" 13 frames per second (about 80 ms per frame).

    [Image: Zooming in shows the individual ASCII characters that make up a PDF Doom frame. Credit: Ading2210]

    Despite its obvious limitations in terms of sound and color, PDF Doom also suffers from text-field input that makes it nearly impossible to perform two actions simultaneously (i.e., moving and shooting). We also have to dock at least a few coolness points because the port doesn't actually work on generic desktop versions of Adobe Acrobat; you need to load it through a Chromium-based web browser. But the project gains those coolness points back with a web front-end that lets users load generic WAD files into a playable PDF.

    Critical quibbles aside, it's a bit wild playing a game of Doom in a format more commonly used for viewing tax documents and forms from your doctor's office. We eagerly look forward to the day some enterprising hacker figures out a way to get a similar, playable Doom working on the actual printed PDF page that comes out of our printers.
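    For the curious, here is a minimal sketch of the underlying trick, not ading2210's actual code: one PDF text field plus document-level JavaScript that rewrites it on a timer. It assumes the reportlab and pypdf Python packages are installed, and whether the script actually runs depends on the viewer's JavaScript support (Chromium's PDFium implements a restricted subset; many other readers run none at all).

    ```python
    from reportlab.pdfgen import canvas
    from pypdf import PdfReader, PdfWriter

    # Step 1: build a PDF containing a single text field named "screen",
    # the same kind of widget PDF Doom uses as one row of its display.
    c = canvas.Canvas("form.pdf")
    c.acroForm.textfield(name="screen", x=72, y=600, width=450, height=24,
                         value="@@@@@@@@  loading  @@@@@@@@")
    c.save()

    # Step 2: attach document-level JavaScript (per Adobe's Acrobat JS API)
    # that redraws the field on a timer -- a two-frame "animation" standing
    # in for Doom's 200 fields refreshed at ~13 fps (about 80 ms per frame).
    js = """
    var frames = ["####  frame one  ####", "....  frame two  ...."];
    var i = 0;
    function tick() {
        this.getField("screen").value = frames[i % frames.length];
        i++;
    }
    app.setInterval("tick()", 500);
    """

    writer = PdfWriter()
    writer.append(PdfReader("form.pdf"))  # keep the page and its form field
    writer.add_js(js)                     # register the script to run on open
    with open("animated.pdf", "wb") as f:
        writer.write(f)
    ```

    Open animated.pdf in a Chromium-based browser and the field should start flipping between frames; scale the idea up to hundreds of fields with per-row ASCII rendering and you have the bones of PDF Doom.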