• Extreme Airport Engineering Review: LaGuardia's Facelift on PBS
    www.wsj.com
    A documentary highlights the human ingenuity behind the recent renovation of the New York airport.
  • Saints and Liars Review: Missions of Relief
    www.wsj.com
    A cadre of American relief workers stood in harm's way, and bent more than a few rules, to help those who were targeted by the Nazis.
  • Why did Elon Musk just say Trump wants to bring two stranded astronauts home?
    arstechnica.com
    NASA SNAFU: Why did Elon Musk just say Trump wants to bring two stranded astronauts home? "We will do so." Eric Berger, Jan 28, 2025 6:52 pm

    SpaceX's Crew Dragon spacecraft is ready for launch atop a Falcon 9 rocket from Space Launch Complex-40 at Cape Canaveral Space Force Station, Florida. Credit: SpaceX

    For reasons that were not immediately clear, SpaceX founder Elon Musk took to his social media site X on Tuesday evening to make a perplexing space-based pronouncement. "The @POTUS has asked @SpaceX to bring home the 2 astronauts stranded on the @Space_Station as soon as possible. We will do so," Musk wrote. "Terrible that the Biden administration left them there so long."

    Now generally, at Ars Technica, it is not our policy to write stories strictly based on things Elon Musk says on X. However, this statement was so declarative, and so consternation-inducing for NASA, that it bears a bit of explication.

    First of all, the most plausible explanation for this is that Elon is being Elon. "He's trolling," said one of my best space policy sources shortly after Musk's tweet. After all, the tweet was sent at 4:20 pm in the central time zone, where SpaceX now has its headquarters.

    Even if it is trolling, it will still cause headaches within NASA.

    Foremost, NASA has gone to great lengths to stress that the two astronauts referenced here, Butch Wilmore and Suni Williams, are not stranded on the International Space Station. There is some debate about whether there was a period last summer when the pair, who flew to the space station on a Boeing Starliner vehicle in early June, were briefly stranded. That mission was hobbled by technical issues, including problems with Starliner's propulsion system.
    (Ultimately, Starliner flew home without its crew.) However, since the arrival of SpaceX's Crew-9 mission with two empty seats in late September, Wilmore and Williams have had a safe ride home. The Dragon vehicle is presently docked to the space station.

    Then along comes Musk, with one of the world's loudest microphones, shouting that NASA's astronauts are stranded and that President Trump wants them saved. It's a bombshell thing for the founder of SpaceX, who has become a close advisor to Trump, to say publicly.

    It is also possible that Musk was not trolling and that Trump asked SpaceX to return Wilmore and Williams earlier for political reasons, namely to, in their view, shame the Biden administration.

    Neither NASA nor SpaceX responded immediately to a request for comment on Tuesday evening.

    Could they come back?

    If Trump demanded that NASA bring the astronauts back now, the Crew-9 mission could return to Earth earlier. It is presently scheduled to splash down in the Pacific Ocean in early April. According to NASA, and the astronauts themselves, Wilmore and Williams are doing fine in space. They have plenty of food, clothes, and work to do. Privately, sources have told Ars the same. Although Wilmore and Williams were not initially expecting to spend 10 months in space, they're taking no serious risks in doing so. In fact, it's part of their jobs to tackle these kinds of contingencies.

    The current return date is being driven by the launch of the Crew-10 mission, also on a SpaceX vehicle. This mission is flying a new Dragon spacecraft, and SpaceX previously asked for a little more time to process and prepare the spacecraft for its debut launch. This moved the target for flying this mission from February to March 25.
    To meet this date, sources indicated that it's possible SpaceX may need to appropriate a different, previously flown Dragon, possibly the Dragon intended for use by the Axiom-4 mission, to complete Crew-10.

    NASA would very much prefer the four astronauts on Crew-10 arrive before Crew-9 departs. Why? Because if Crew-9 were to depart sooner, it would leave just a single astronaut, Don Pettit, on board the station. Now, Pettit is a very experienced and capable astronaut, but having just a single NASA astronaut on board to operate the US segment of the station is far from optimal. In addition to leaving Pettit in a difficult position, it would cancel a planned spacewalk in March and leave just a single person to prepare a Northrop Grumman cargo spacecraft for departure. This is apparently a big deal.

    "It takes time to load trash; everything has to be packed in certain bags in certain locations for various reasons," a NASA source told Ars. "For example, any batteries that are being trashed have to be in a fireproof container. Bags have to be loaded in certain locations to maintain the proper center of gravity. And you've got seven crew members' worth of trash that have already been waiting since the last disposal flight."

    Another consideration is if Crew-10 were to slip further from its late March launch date. Pettit flew to the space station on a Russian Soyuz vehicle, and it is due to return on April 20. The Soyuz spacecraft is certified to remain in orbit for 210 days, and April 20 is already 221 days after its launch. April 20 is probably a hard end date for that mission.

    So technically, yes, the "stranded" astronauts on the space station probably could come home as early as next week.
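    The Soyuz certification arithmetic above is easy to sanity-check. A quick sketch, with the launch date back-derived from the article's own 221-day figure rather than independently sourced:

```python
from datetime import date, timedelta

CERTIFIED_DAYS = 210               # Soyuz on-orbit certification limit
RETURN = date(2025, 4, 20)         # scheduled Soyuz return date

# The article says April 20 falls 221 days after launch; back-dating
# gives the implied launch date (derived, not independently sourced).
launch = RETURN - timedelta(days=221)

print(launch)                                    # 2024-09-11
print((RETURN - launch).days - CERTIFIED_DAYS)   # 11 days past certification
```

    So any slip of Crew-10 past late March eats further into a certification margin that, on paper, is already exhausted by April 20.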
    But if they were to do so, it would create a lot of headaches for NASA, its international partners, and probably even for Musk's human spaceflight team at SpaceX.

    Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and the author of two books: Liftoff, about the rise of SpaceX, and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.
  • How does DeepSeek R1 really fare against OpenAI's best reasoning models?
    arstechnica.com
    You must defeat R1 to stand a chance: How does DeepSeek R1 really fare against OpenAI's best reasoning models? We run the LLMs through a gauntlet of tests, from creative writing to complex instruction. Kyle Orland, Jan 28, 2025 5:44 pm

    Round 1. Fight! Credit: Aurich Lawson

    It's only been a week since Chinese company DeepSeek launched its open-weights R1 reasoning model, which is reportedly competitive with OpenAI's state-of-the-art o1 models despite being trained for a fraction of the cost. Already, American AI companies are in a panic, and markets are freaking out over what could be a breakthrough in the status quo for large language models.

    While DeepSeek can point to common benchmark results and its Chatbot Arena leaderboard standing to prove the competitiveness of its model, there's nothing like direct use cases to get a feel for just how useful a new model is. To that end, we decided to put DeepSeek's R1 model up against OpenAI's ChatGPT models in the style of our previous showdowns between ChatGPT and Google Bard/Gemini.

    This was not designed to be a test of the hardest problems possible; it's more of a sample of everyday questions these models might get asked by users.

    This time around, we put each DeepSeek response against ChatGPT's $20/month o1 model and $200/month o1 Pro model, to see how it stands up to OpenAI's "state of the art" product as well as the "everyday" product that most AI consumers use. While we re-used a few of the prompts from our previous tests, we also added prompts derived from Chatbot Arena's "categories" appendix, covering areas such as creative writing, math, instruction following, and so-called "hard prompts" that are "designed to be more complex, demanding, and rigorous."
    We then judged the responses based not just on their "correctness" but also on more subjective qualities. While we judged each model primarily on the responses to our prompts, when appropriate, we also looked at the "chain of thought" reasoning they output to get a better idea of what's going on under the hood. In the case of DeepSeek R1, this sometimes resulted in some extremely long and detailed discussions of the internal steps taken to get to that final result.

    Dad jokes

    Prompt: Write five original dad jokes.

    Results: For the most part, all three models seem to have taken our demand for "original" jokes more seriously this time than in the past. Out of the 15 jokes generated, we were only able to find similar examples online for two of them: o1's "belt made out of watches" and o1 Pro's "sleeping on a stack of old magazines."

    Disregarding those two, the results were highly variable. All three models generated quite a few jokes that either struggled too hard for a pun (R1's "quack"-seal enthusiast duck; o1 Pro's "bark-to-bark communicator" dog) or that just didn't really make sense at all (o1's "sweet time" pet rock; o1 Pro's restaurant that serves "everything on the menu").

    That said, there were a few completely original, completely groan-worthy winners to be found here. We particularly liked DeepSeek R1's bicycle that doesn't like to "spin its wheels" with pointless arguments and o1's vacuum-cleaner band that "sucks" at live shows.
    Compared to the jokes LLMs generated just over a year ago, there's definitely progress being made on the humor front here.

    Winner: ChatGPT o1 probably had slightly better jokes overall than DeepSeek R1, but it loses some points for including a joke that was not original. ChatGPT o1 Pro is the clear loser, though, with no original jokes that we'd consider the least bit funny.

    Abraham "Hoops" Lincoln

    Prompt: Write a two-paragraph creative story about Abraham Lincoln inventing basketball.

    Results: DeepSeek R1's response is a delightfully absurd take on an absurd prompt. We especially liked the bits about creating "a sport where men leap not into trenches, but toward glory" and a "13th amendment" to the rules preventing players from being "enslaved by poor sportsmanship" (whatever that means). DeepSeek also gains points for mentioning Lincoln's actual secretary, John Hay, and the president's chronic insomnia, which supposedly led him to patent a pneumatic pillow (whatever that is).

    ChatGPT o1, by contrast, feels a little more straitlaced. The story focuses mostly on what a game of early basketball might look like and how it might be later refined by Lincoln and his generals. While there are a few incidental details about Lincoln (his stovepipe hat, leading a nation at war), there's a lot of filler material that makes it feel more generic.

    ChatGPT o1 Pro makes the interesting decision to set the story "long before [Lincoln's] presidency," making the game the hit of Springfield, Illinois.
    The model also makes a valiant attempt to link Lincoln's eventual ability to "unify a divided nation" with the cheers of the basketball-watching townsfolk. Bonus points for the creative game name of "Lincoln's Hoop and Toss," too.

    Winner: While o1 Pro made a good showing, the sheer wild absurdity of the DeepSeek R1 response won us over.

    Hidden code

    Prompt: Write a short paragraph where the second letter of each sentence spells out the word CODE. The message should appear natural and not obviously hide this pattern.

    Results: This prompt represented DeepSeek R1's biggest failure in our tests, with the model using the first letter of each sentence for the secret code rather than the requested second letter. When we expanded the model's extremely thorough explanation of its 220-second "thought process," though, we surprisingly found a paragraph that did match the prompt, which was apparently thrown out just before giving the final answer: "School courses build foundations. You hone skills through practice. IDEs enhance coding efficiency. Be open to learning always."

    ChatGPT o1 made the same mistake regarding first and second letters as DeepSeek, despite "thought details" that assure us it is "ensuring letter sequences" and "ensuring alignment."
    ChatGPT o1 Pro is the only one that seems to have understood the assignment, crafting a delicate, haiku-like response with the "code" word correctly embedded after over four minutes of thinking.

    Winner: ChatGPT o1 Pro wins pretty much by default as the only one able to correctly follow directions.

    Historical color naming

    Prompt: Would the color be called 'magenta' if the town of Magenta didn't exist?

    Results: All three responses correctly link the color name "magenta" to the dye's discovery in the town of Magenta and the nearly coincident 1859 Battle of Magenta, which helped make the color famous. All three responses also mention the alternative name of "fuchsine" and its link to the similarly colored fuchsia flower.

    Stylistically, ChatGPT o1 Pro gains a few points for splitting its response into a tl;dr "short answer" followed by a point-by-point breakdown of the details discussed above and a coherent conclusion statement. When it comes to the raw information, though, all three models performed admirably.

    Winner: ChatGPT o1 Pro wins by a stylistic hair.

    Big primes

    Prompt: What is the billionth largest prime number?

    Results: We see a big divergence between DeepSeek and the ChatGPT models here. DeepSeek is the only one to give a precise answer, referencing both PrimeGrid and The Prime Pages for previous calculations of 22,801,763,489 as the billionth prime. ChatGPT o1 and o1 Pro, on the other hand, insist that this value "hasn't been publicly documented" (o1) or that "no well-known, published project has yet singled [it] out" (o1 Pro).

    Instead, both ChatGPT models go into a detailed discussion of the Prime Number Theorem and how it can be used to estimate that the answer lies somewhere in the 22.8 to 23 billion range. DeepSeek briefly mentions this theorem, but mainly as a way to verify that the answers provided by The Prime Pages and PrimeGrid are reasonable.

    Oddly enough, both o1 models' written-out "thought processes" make mention of "considering references" or comparing to "refined references" during their calculations, suggesting some lists of primes are buried deep in their training data. But neither model was willing or able to directly reference those lists for a precise answer.

    Winner: DeepSeek R1 is the clear winner for precision here, though the ChatGPT models give pretty good estimates.

    Airport planning

    Prompt: I need you to create a timetable for me given the following facts: my plane takes off at 6:30am.
    I need to be at the airport 1h before take off. it will take 45mins to get to the airport. I need 1h to get dressed and have breakfast before we leave. The plan should include when to wake up and the time I need to get into the vehicle to get to the airport in time for my 6:30am flight, think through this step by step.

    Results: All three models get the basic math right here, calculating that you need to wake up at 3:45 am to make a 6:30 am flight. ChatGPT o1 earns a few bonus points for generating the response seven seconds faster than DeepSeek R1 (and much faster than o1 Pro's 77 seconds); testing on o1 Mini might generate even quicker response times.

    DeepSeek claws a few points back, though, with an added "Why this works" section containing a warning about traffic/security line delays and a "Pro Tip" to lay out your packing and breakfast the night before. We also like R1's "(no snooze!)" admonishment next to the 3:45 am wake-up time. Well worth the extra seven seconds of thinking.

    Winner: DeepSeek R1 wins by a hair with its stylistic flair.

    Follow the ball

    Prompt: In my kitchen, there's a table with a cup with a ball inside. I moved the cup to my bed in my bedroom and turned the cup upside down. I grabbed the cup again and moved to the main room. Where's the ball now?

    Results: All three models are able to correctly reason that turning a cup upside down will cause a ball to fall out and remain on the bed, even if the cup moves later.
    This might not sound that impressive if you have object permanence, but LLMs have struggled with this kind of "world model" understanding of objects until quite recently.

    DeepSeek R1 deserves a few bonus points for noting the "key assumption" that there's no lid on the cup keeping the ball inside (maybe it was a trick question?). ChatGPT o1 also gains a few points for noting that the ball may have rolled off the bed and onto the floor, as balls are wont to do.

    We were also a bit tickled by R1 insisting that this prompt is an example of "classic misdirection" because "the focus on moving the cup distracts from where the ball was left." We urge Penn & Teller to integrate an "amaze and delight the large language model" ball-on-the-bed trick into their Vegas act.

    Winner: We'll declare a three-way tie here, as all the models followed the ball correctly.

    Complex number sets

    Prompt: Give me a list of 10 natural numbers, such that at least one is prime, at least 6 are odd, at least 2 are powers of 2, and such that the 10 numbers have at minimum 25 digits between them.

    Results: While there is a whole host of number lists that would satisfy these conditions, this prompt effectively tests the LLMs' abilities to follow moderately complex and confusing instructions without getting tripped up. All three generated valid responses, though in intriguingly different ways.
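    The prompt's four conditions are easy to verify mechanically. A minimal sketch in Python (the example list is hypothetical, not any model's actual answer):

```python
# Mechanical check of the prompt's four conditions. The `example`
# list below is hypothetical, not any model's actual answer.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_power_of_two(n: int) -> bool:
    return n > 0 and n & (n - 1) == 0

def satisfies(nums: list[int]) -> bool:
    return (
        len(nums) == 10
        and any(is_prime(n) for n in nums)              # at least one prime
        and sum(n % 2 == 1 for n in nums) >= 6          # at least 6 odd
        and sum(is_power_of_two(n) for n in nums) >= 2  # at least 2 powers of 2
        and sum(len(str(n)) for n in nums) >= 25        # at least 25 total digits
    )

example = [101, 333, 5555, 777, 999, 111, 313, 515, 1024, 2048]
print(satisfies(example))  # True

# Re-adding a per-number digit tally like R1's ("3+3+4+...") is
# exactly the step where a model can slip.
print(sum([3, 3, 4, 3, 3, 3, 3, 3, 4, 4]))  # 33
```

    This kind of deterministic checker is also how one would grade such instruction-following prompts at scale, rather than eyeballing each response.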
    ChatGPT o1's choice of 2^30 and 2^31 as powers of two seemed a bit out of left field, as did o1 Pro's choice of the prime number 999,983.

    We have to dock some significant points from DeepSeek R1, though, for insisting that its solution had 36 combined digits when it actually had 33 ("3+3+4+3+3+3+3+3+4+4," as R1 itself notes before giving the wrong sum). While this simple arithmetic error didn't make the final set of numbers incorrect, it easily could have with a slightly different prompt.

    Winner: The two ChatGPT models tie for the win thanks to their lack of arithmetic mistakes.

    Declaring a winner

    While we'd love to declare a clear winner in the brewing AI battle here, the results are too scattered to do that. DeepSeek's R1 model definitely distinguished itself by citing reliable sources to identify the billionth prime number and with some quality creative writing in the dad jokes and Abraham Lincoln basketball prompts. However, the model failed on the hidden code and complex number set prompts, making basic errors in counting and/or arithmetic that one or both of the OpenAI models avoided.

    Overall, though, we came away from these brief tests convinced that DeepSeek's R1 model can generate results that are overall competitive with the best paid models from OpenAI. That should give great pause to anyone who assumed extreme scaling in terms of training and computation costs was the only way to compete with the most deeply entrenched companies in the world of AI.

    Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
  • I worked with 11 managers in 7 years at Amazon. I used 4 strategies to build trust and grow fast.
    www.businessinsider.com
    Sai Chiligireddy has worked with nearly a dozen managers at Amazon. The engineering manager once struggled with his performance rating after working under three managers. He advises documenting achievements and preparing for meetings to build trust quickly.

    This as-told-to essay is based on a conversation with Sai Chiligireddy, an engineering manager at Amazon's Seattle office. It has been edited for length and clarity. Business Insider has verified his employment history.

    Amazon was one of my first jobs out of college, and I landed it in 2017 after a year of working at Juniper Networks. In the last seven years, I have worked with 11 managers, partly due to my bosses switching teams and companies, but also because I have asked to move teams when I stopped seeing growth opportunities or when I realized feedback on my performance was vague.

    The first couple of times, I was worried about how frequent manager changes would impact my career growth and the kind of projects I would get. But it got better during the later switches, when I learned to communicate my goals better. Here are four actions I took.

    1. Own your career

    I have always approached my career with the mindset that I am responsible for it and my manager is a facilitator. That mental model ensures I am communicating before I am asked to and seeking guidance from people beyond my immediate manager. I make a habit of reaching out to multiple managers at Amazon to ask how they grew in their careers and to get feedback on how I could do things differently.

    2. Document everything

    I maintain a brag sheet with a log of all my achievements and summaries of all the projects I worked on, including the feedback from my previous managers, team leads, and any stakeholders. I set 30 to 45 minutes aside every week or two to make sure I am not missing anything.

    There's a lot of mobility in tech. If people you worked with in the past year leave, there is nobody to vouch for your work.
    My performance rating suffered once when I worked under three managers who all had different perceptions of what I worked on, and I didn't take any active steps to rectify it. I share this document with all of my new managers so they have my track record on hand and have context on all of my current projects.

    3. Prepare for one-on-ones

    When I first started my career, I used to wing one-on-one meetings with my managers. I got very little out of these meetings. I began taking the initiative to set up introductory conversations with all my new managers, where I share my short-term and long-term goals. I also share the brag document I keep in this call to give them an overview of where I am with my career and what my current projects are.

    After that first meeting, I switched to a different format for the rest of our sessions. I borrowed from a book called "The Art of Meeting with Your Manager" and broke my meetings into six sections. I tweak this according to different managers and their preferences.

    Icebreaker: To ease into conversation.
    Employee section: I share recent contributions my manager might not have on their radar, challenges I faced, and updates on discussions I have had with others in my team.
    Manager section: I proactively ask for feedback.
    Development and growth: We discuss where I stand currently and brainstorm ideas and projects to make sure I am filling gaps to meet the criteria for the next employee level.
    Align priorities: We discuss what I should work on immediately.
    Action items: My manager and I both note down our action items for the next meeting and follow up on any action items from the previous meeting.

    4. Divide and conquer

    As I grew in my career, I started taking on more leadership responsibilities. I began supporting new engineers on my team through one-on-ones and set up Slack channels where they could ask for help. Collaboration with other teams definitely changed. My manager and I divided and conquered.
    I would take ownership of five to six teams, and my manager would handle three to four. I started trying to see myself as a support system for my manager instead of someone just working under them.
  • Trump administration offers buyouts to federal workers. Read the letter sent to employees.
    www.businessinsider.com
    The Trump administration is offering buyouts to members of the federal workforce. Employees who resign will have full pay and benefits through September, officials said. Some exclusions apply to military, postal, immigration, and national security roles.

    President Donald Trump is offering buyouts to federal workers who don't want to stick around under the new administration, according to a letter sent to government employees on Tuesday. The letter, which was shared by the US Office of Personnel Management, said federal employees had from January 28 to February 6 to decide if they would like to resign under this program. Those who resign will receive full pay and benefits regardless of their daily workload and would not be required to attend in-person work. The webpage listed a deferred resignation letter which specifies that employees would complete "reasonable and customary tasks and processes to facilitate" their departure.

    The resignation offer was available to all full-time federal employees except for military personnel, US Postal Service employees, those in immigration enforcement and national security roles, and other positions that were specifically excluded by an agency. The letter said a recent order issued by Trump meant there would be significant reform in the federal workforce, which it said would be "built around four pillars": return to office, performance culture, a more streamlined and flexible workforce, and enhanced standards of conduct.

    Read OPM's full memo. The White House did not respond to BI's request for comment.
  • The Logoff: What is up with Trump's plan to freeze federal spending?
    www.vox.com
    The Logoff is a daily newsletter that helps you stay informed about the Trump administration without letting political news take over your life. Subscribe here.

    Welcome to The Logoff. Today's edition is about Donald Trump's attempt to freeze a huge portion of federal spending, a move that has implications for millions of Americans who depend on government programs and for the long-term balance of power.

    What payments is Trump trying to freeze? So far, there's mass confusion. The order appears, broadly, to freeze grant funding that goes out to organizations, but not payments to individuals. Medicare and Social Security are not affected, but nobody seems clear on what's happening to Medicaid (though the latest thinking is that it's safe), and interpretations conflict on other programs, like food stamps.

    When does this go into effect? It was set to go into effect at 5 pm today, but a federal judge just issued a temporary halt on the spending freeze while judges review it. So for now, we're in limbo.

    How long is the pause intended to last? Agencies have until February 10 to review the spending to see if it aligns with Trump's priorities. What happens after that is unclear.

    Is this a big deal? Extremely. We're talking about billions of dollars and programs that provide day-to-day aid for people in need.

    Is this normal? Not at all. Presidents occasionally pause certain grant funding for review, but two things make this exceptional: the scope of the freeze, which goes way, way beyond what Trump's predecessors have done, and the fact that Trump has threatened to cancel some spending entirely.

    Is this legal? Buckle up. There's disagreement over whether even pausing this spending is legal, and that's already being challenged in court. But the big question is what happens if Trump follows through on his promise to fully cancel some spending that Congress has authorized. That move, known as impoundment, is illegal under US law.
    But Trump's team says that law is unconstitutional, and the fight seems likely to go all the way to the Supreme Court. And if the Supreme Court were to side with Trump, it would hand much of Congress's control of spending over to the president, a massive rebalancing of power within the federal government.

    And with that, it's time to log off... I was intrigued by this story about what appears to be a piece of the moon that broke off eons ago and is now on its own orbital journey. It's a nice reminder that the universe is full of mysteries. I'll see you back here tomorrow.
  • Here's Your Chance to Own a Real Doctor Who TARDIS
    gizmodo.com
    Ever felt the sudden urge to leap into a TARDIS and flee your current timeline or location (or both)? You can live out your Time Lord fantasies if your pockets are deep enough when a screen-matched TARDIS goes up for auction, part of a trove of Doctor Who items being sold to aid BBC-backed charity Children in Need. The sale also features an array of costumes (picture it: your most authentic cosplay ever!), props, and other memorabilia from the long-running sci-fi series, which will return to Disney+ and the BBC later this year for a second season featuring Ncuti Gatwa's Fifteenth Doctor. Here are just some of the highlights from Propstore's fundraising auction.

TARDIS (Propstore)
As you can see from the image atop this post, the door actually opens on this screen-matched (meaning it was actually used on-screen) TARDIS; alas, the interior does not appear to be any bigger than what you'd expect from the outside. This version of the Doctor's signature mode of transportation was used in An Adventure in Space and Time, a docudrama created for the show's 50th anniversary that aired in 2013.

Traitor Dalek (Propstore)
Add this full-sized and screen-matched little buddy to your decor, and give guests a jump scare when they first encounter it. The Traitor Dalek, voiced by Nicholas Briggs, had its moment in 2022's The Power of the Doctor, a special that aired as part of the BBC's centenary celebrations, where it, befitting its name, actually tried to help Jodie Whittaker's Thirteenth Doctor defeat the Master (Sacha Dhawan). You can still yell "Exterminate!" at it, though.

Weeping Angel statue (Propstore)
Talk about a jump scare! Definitely do not blink when you're in the room with this notorious Doctor Who villain. Yes, one of its hands is missing, but that somehow adds to the menace, doesn't it?

Tenth Doctor costume (Propstore)
Formalwear, in a manner preferred by David Tennant's Tenth Doctor.
This lot includes more than you see here; you get three white dress shirts, a pair of black trousers, a matching black suit jacket, and a black silk bow tie, as well as a pair of black Converse trainers. Two of the shirts are size 15, one is size 16, and the trainers are size seven, according to the Propstore listing.

Eleventh Doctor costume (Propstore)
Matt Smith's time as the Doctor yielded one of the character's most important fashion moments: that jaunty bow tie. The costume here appeared in Closing Time and Let's Kill Hitler, among other adventure-filled episodes.

Thirteenth Doctor costume (Propstore)
Jodie Whittaker was the first woman to portray the iconic Doctor, so perhaps, as Indiana Jones might say, this outfit should be in a museum somewhere. But in the name of charity, you can stash her coat, scarf, cropped pants, and distinctive rainbow-motif t-shirt in your own closet.

Donna's wedding dress (Propstore)
You also get the shoes to go with this ensemble worn by one of the most beloved non-Doctor Doctor Who characters, Catherine Tate's Donna Noble, in the 2006 Christmas special The Runaway Bride.

See something you need, or want to peruse all the items up for auction? The sale benefitting BBC Children in Need runs February 11-25, and you can find all the details, including how to bid, here.
  • Garmin GPS Watches Are Bricking and Nobody Knows Why
    gizmodo.com
    By Thomas Maxwell | Published January 28, 2025

Garmin fitness watches are experiencing a problem that leaves them stuck in a boot loop. Brent Rose/Gizmodo

Garmin's sport watches appear to be getting stuck in a boot loop following a recent software update. The issue appears widespread, with watches from the Venu 3 to the Forerunner 265 all impacted by the blue triangle error. The exact cause of the issue is unclear, but members of Garmin's community forums speculate that a recent update was corrupted.

In a notice posted to its website, Garmin acknowledges the issue and advises owners to press and hold the power button until the device turns off, then power it back on and sync with the Garmin Connect app or Garmin Express. If that fails to resolve the issue, the company has a support page with further instructions for each of its specific products. The message on Garmin's website acknowledges widespread problems with its sport watches.

Garmin's instructions seem to suggest that devices will need to be factory reset if a power cycle does not work. Reconfiguring a watch from scratch would be inconvenient, so it might be worth waiting to see if Garmin can issue an update that resolves the issue. If you cannot wait that long, you might just have to go through the hassle. There is no guarantee that Garmin's suggested solutions will work. If you manage to find a solution that works for your watch, feel free to leave it in the comments.

Despite Apple's dominance in smartwatches, Garmin has managed to carve out a niche for itself making quality watches targeted at super athletes. Its sport watches, like the recently released Fenix 8, are built to withstand wear and tear, can last weeks on a charge, and include comprehensive fitness-tracking features not found on the Apple Watch.
Some question whether all of Garmin's sensors are as accurate as those offered in the Apple Watch, but in general, Garmin watches are the go-to for athletes looking for access to extensive data on their workouts and vitals. The company also makes other GPS-based products, like an autopilot system for boats.
  • An Underwater Volcano Off the Oregon Coast May Erupt by the End of 2025
    www.discovermagazine.com
    A sleeping giant of a volcano is stirring in its underwater bed. The volcano, tucked underneath a submerged peak called Axial Seamount, is the most active volcano in the Pacific Northwest. Seismic activity, including hundreds of small earthquakes a day, indicates an eruption may be forthcoming, perhaps by the end of 2025, according to a blog kept by Bill Chadwick, a volcanologist who has been closely monitoring activity associated with Axial Seamount for years.

Volcano Wake-up Time
That seismic activity is a harbinger. "An eruption does not seem imminent, but it can't do this forever," Chadwick and his colleague, University of North Carolina geophysicist Scott Nooner, wrote in an Oregon State University blog post.

The volcano, about 300 miles off the coast of Oregon and a mile beneath the ocean's surface, is among the most monitored in the world and has been under observation since 1997. When the volcano erupts, it likely won't be cataclysmic or even apparent to anyone above water. Since Axial Seamount is shaped from thin layers of lava, an eruption will likely crack open the surface. Magma will then likely ooze out rather than explode into the air. It is unlikely its eruption will produce a tsunami.

Observers use a variety of geophysical, chemical, and biological sensors, as well as still and video cameras, to watch for signs of magma flow. That instrumentation on Axial Caldera's summit makes it the most advanced underwater volcanic observatory in the world.

Predictions Based on the Past
Researchers who have monitored this underwater volcano since 1997 can base their predictions on some precedents: Axial Seamount erupted in 1998, 2011, and 2015. Observers say they see the same signs of swelling at its base that preceded the previous eruptions. That swelling is a result of rising magma pressing beneath the mountain's thin surface. A study from 2024 documented the volcano's plumbing.
The researchers noted multiple reservoirs of magma sitting asymmetrically beneath the Earth's crust. They also traced the molten rock's passage into the mountain via a seafloor crack.

Although monitoring Axial Seamount won't save any lives (because it won't endanger any), doing so will help scientists better predict eruptions in other areas. Carefully observing and recording every tremor beneath it will give scientists a better understanding of the factors leading up to a volcano's eruption. In doing so, this sleeping giant will help generate important wake-up calls near other volcanoes.

Article Sources
Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used below for this article:
Oregon State University. Blog to chronicle eruption forecasts at Axial Seamount
Regional Cabled Array. Axial Caldera
NOAA. Axial Volcano

Before joining Discover Magazine, Paul Smaglik spent over 20 years as a science journalist, specializing in U.S. life science policy and global scientific career issues. He began his career in newspapers but switched to scientific magazines. His work has appeared in publications including Science News, Science, Nature, and Scientific American.