• ARSTECHNICA.COM
    After embarrassing blunder, AT&T promises bill credits for future outages
    AT&T's complicated promise | If you lost service but only 9 cell towers went down, you won't get a bill credit. By Jon Brodkin | Jan 8, 2025 3:10 pm | Credit: Getty Images | Bloomberg

AT&T, following last year's embarrassing botched update that kicked every device off its wireless network and blocked over 92 million phone calls, is now promising full-day bill credits to mobile customers for future outages that last at least 60 minutes and meet certain other criteria. A similar promise is being made to fiber customers for unplanned outages lasting at least 20 minutes, but only if the customer uses an AT&T-provided gateway.

The "AT&T Guarantee" announced today has caveats that can make it possible for a disruption to not be covered. AT&T says the promised mobile bill credits are "for wireless downtime lasting 60 minutes or more caused by a single incident impacting 10 or more towers."

The full-day bill credits do not include a prorated amount for the taxes and fees imposed on a monthly bill. The "bill credit will be calculated using the daily rate customer is charged for wireless service only (excludes taxes, fees, device payments, and any add-on services)," AT&T said. If an outage lasts more than 24 hours, a customer will receive another full-day bill credit for each additional day.

If nine or fewer AT&T towers aren't functioning, a customer won't get a credit even if they lose service for an hour. The guarantee kicks in when a "minimum 10 towers [are] out for 60 or more minutes resulting from a single incident," the customer "was connected to an impacted tower at the time the outage occurs," and the customer "loses service for at least 60 consecutive minutes as a result of the outage."

AT&T will decide whether an outage is really an outage

The guarantee "excludes events beyond the control of AT&T, including but not limited to, natural disasters, weather-related events, or outages caused by third parties." AT&T says it will determine "in its sole discretion" whether a disruption is a qualifying network outage.

"Consumers will automatically receive a bill credit equaling a full day of service and we'll reach out to our small business customers with options to help make it right," AT&T said. When there's an outage, AT&T said it will "notify you via e-mail or SMS to inform you that you've been impacted. Once the interruption has been resolved, we'll contact you with details about your bill credit." If AT&T fails to provide the promised credit for any reason, customers will have to call AT&T or visit an AT&T store.

To qualify for the similar fiber-outage promise, "customers must use AT&T-provided gateways," the firm said. There are other caveats that can prevent a home Internet customer from getting a bill credit. AT&T said the fiber-outage promise "excludes events beyond the control of AT&T, including but not limited to, natural disasters, weather-related events, loss of service due to downed or cut cable wires at a customer residence, issues with wiring inside customer residence, and power outages at customer premises. Also excludes outages resulting from planned maintenance."

AT&T notes that some residential fiber customers in multi-dwelling units "have an account with AT&T but are not billed by AT&T for Internet service."
In the case of outages, these customers would not get bill credits but would be given the option to redeem a reward card that's valued at $5 or more.

Botched network update

In February 2024, AT&T caused a major outage by botching a network update and took over 12 hours to fully restore service. At the time, AT&T said it was automatically issuing credits to affected customers "for the average cost of a full day of service."

"All voice and 5G data services for AT&T wireless customers were unavailable, affecting more than 125 million devices, blocking more than 92 million voice calls, and preventing more than 25,000 calls to 911 call centers," the Federal Communications Commission said in a report after a months-long investigation into the incident.

The FCC report said the nationwide outage began three minutes after "AT&T Mobility implemented a network change with an equipment configuration error." This error caused the AT&T network "to enter 'protect mode' to prevent impact to other services, disconnecting all devices from the network."

The FCC found various problems in AT&T's processes that increased the likelihood of an outage and made recovery more difficult than it should have been. The agency described "a lack of adherence to AT&T Mobility's internal procedures, a lack of peer review, a failure to adequately test after installation, inadequate laboratory testing, insufficient safeguards and controls to ensure approval of changes affecting the core network, a lack of controls to mitigate the effects of the outage once it began, and a variety of system issues that prolonged the outage once the configuration error had been remedied."

AT&T said it implemented changes to prevent the same problem from happening again. The company could face punishment, but that is less likely to happen under Trump's pick to chair the FCC, Brendan Carr, who is taking over soon. The Biden-era FCC compelled Verizon Wireless to pay a $1,050,000 fine and implement a compliance plan because of a December 2022 outage in six states that lasted one hour and 44 minutes.

An AT&T executive told Reuters that the company has been trying to regain customers' trust over the past few years with better offers and product improvements. "Four years ago, we were losing share in the industry for a significant period of time... we knew we had lost our customers' trust," Reuters quoted AT&T Executive VP Jenifer Robertson as saying in an article today.

Jon Brodkin is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
  • ARSTECHNICA.COM
    How I program with LLMs
    learning the machine | Generative models can be powerfully useful, if you're willing to adapt your approach. By David Crawshaw | Jan 8, 2025 2:51 pm | Credit: Aurich Lawson | Getty Images

This piece was originally published on David Crawshaw's blog and is reproduced here with permission.

This article is a summary of my personal experiences with using generative models while programming over the past year. It has not been a passive process. I have intentionally sought ways to use LLMs while programming to learn about them. The result has been that I now regularly use LLMs while working, and I consider their benefits to be net-positive on my productivity. (My attempts to go back to programming without them are unpleasant.)

Along the way, I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev. It's very early, but so far, the experience has been positive.

Background

I am typically curious about new technology. It took very little experimentation with LLMs for me to want to see if I could extract practical value. There is an allure to a technology that can (at least some of the time) craft sophisticated responses to challenging questions. It is even more exciting to watch a computer attempt to write a piece of a program as requested and make solid progress.

The only technological shift I have experienced that feels similar to me happened in 1995, when we first configured my LAN with a usable default route. I replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once, I had the Internet on tap. Having the Internet all the time was astonishing and felt like the future, probably far more to me in that moment than to many who had been on the Internet longer at universities, because I was immediately dropped into high Internet technology: web browsers, JPEGs, and millions of people. Access to a powerful LLM feels like that.

So I followed this curiosity to see if a tool that can generate something mostly not wrong most of the time could be a net benefit in my daily work. The answer appears to be "yes": generative models are useful for me when I program. It has not been easy to get to this point. My underlying fascination with the new technology is the only way I have managed to figure it out, so I am sympathetic when other engineers claim LLMs are useless. But as I have been asked more than once how I can possibly use them effectively, this post is my attempt to describe what I have found so far.

Overview

There are three ways I use LLMs in my day-to-day programming:

Autocomplete. This makes me more productive by doing a lot of the more-obvious typing for me. It turns out that the current state of the art can be improved on here, but that's a conversation for another day. Even the standard products you can get off the shelf are better for me than nothing. I convinced myself of that by trying to give them up. I couldn't go a week without getting frustrated by how much mundane typing I had to do before having a FIM model. This is the first place to experiment.
Search. If I have a question about a complex environment, say "how do I make a button transparent in CSS," I will get a far better answer asking any consumer-based LLM (o1, Sonnet 3.5, etc.) than I do using an old-fashioned web search engine and trying to parse the details out of whatever page I land on. (Sometimes the LLM is wrong. So are people. The other day, I put my shoe on my head and asked my two-year-old what she thought of my hat. She dealt with it and gave me a proper scolding. I can deal with LLMs being wrong sometimes, too.)

Chat-driven programming. This is the hardest of the three. This is where I get the most value out of LLMs, but it's also the one that bothers me the most. It involves learning a lot and adjusting how you program, and on principle, I don't like that. It requires at least as much messing about to get value out of LLM chat as it does to learn to use a slide rule, with the added annoyance that it is a non-deterministic service that is regularly changing its behavior and user interface. Indeed, the long-term goal in my work is to replace the need for chat-driven programming to bring the power of these models to a developer in a way that is not so off-putting. But as of now, I am dedicated to approaching the problem incrementally, which means figuring out how to do best with what we have and improve it.

As this is about the practice of programming, this has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say that it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use an LLM for a search-like task once, and program in a chat session once.

The rest of this is about extracting value from chat-driven programming.

Why use chat at all?

Let me try to motivate the skeptical. A lot of the value I get out of chat-driven programming is that I reach a point in the day when I know what needs to be written, I can describe it, but I don't have the energy to create a new file, start typing, and then start looking up the libraries I need. (I'm an early-morning person, so this is usually any time after 11 am for me, though it can also be any time I context-switch into a different language/framework/etc.) LLMs perform that service for me in programming. They give me a first draft with some good ideas and several of the dependencies I need, and often some mistakes. Often, I find fixing those mistakes is a lot easier than starting from scratch.

This means chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments. Some days, I mostly write TypeScript, some days mostly Go. I spent a week in a C++ codebase last month exploring an idea and just had an opportunity to learn the HTTP server-sent events format. I am all over the place, constantly forgetting and relearning. If you spend more time proving your optimization of a cryptographic algorithm is not vulnerable to timing attacks than you do writing the code, I don't think any of my observations here will be useful to you.

Chat-based LLMs do best with exam-style questions

Give an LLM a specific objective and all the background material it needs so it can craft a well-contained code review packet, and expect it to adjust as you question it.
There are two major elements to this:

Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results. This is why I have had little success with chat inside my IDE. My workspace is often messy, the repository I am working on is by default too large, and it is filled with distractions. One thing humans appear to be much better than LLMs at (as of January 2025) is not getting distracted. That is why I still use an LLM via a web browser, because I want a blank slate on which to craft a well-contained request.

Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good. You can ask an LLM to do things you would never ask a human to do. "Rewrite all of your new tests introducing an <intermediate concept designed to make the tests easier to read>" is an appalling thing to ask a human; you're going to have days of tense back-and-forth about whether the cost of the work is worth the benefit. An LLM will do it in 60 seconds and not make you fight to get it done. Take advantage of the fact that redoing work is extremely cheap.

The ideal task for an LLM is one where it needs to use a lot of common libraries (more than a human can remember, so it is doing a lot of small-scale research for you), working to an interface you designed, or making it produce a small interface you can verify as sensible quickly, and where it can write readable tests. Sometimes this means choosing the library for it if you want something obscure (though with open source code, LLMs are quite good at this).

You always need to pass an LLM's code through a compiler and run the tests before spending time reading it. They all produce code that doesn't compile sometimes (always making errors I find surprisingly human; every time I see one, I think, "There but for the grace of God go I"). The better LLMs are very good at recovering from their mistakes; often, all they need is for you to paste the compiler error or test failure into the chat, and they fix the code.

Extra code structure is much cheaper

There are vague tradeoffs we make every day around the cost of writing, the cost of reading, and the cost of refactoring code. Let's take Go package boundaries as an example. The standard library has a package net/http that contains some fundamental types for dealing with wire format encoding, MIME types, etc. It contains an HTTP client and an HTTP server. Should it be one package or several? Reasonable people can disagree! So much so that I do not know if there is a correct answer today. What we have works; after 15 years of use, it is still not clear to me that some other package arrangement would work better.

The advantages of a larger package include centralized documentation for callers, easier initial writing, easier refactoring, and easier sharing of helper code without devising robust interfaces for them (which often involves pulling the fundamental types of a package out into yet another leaf package filled with types). The disadvantages include the package being harder to read because many different things are going on (try reading the net/http client implementation without tripping up and finding yourself in the server code for a few minutes), or it being harder to use because there is too much going on in it.
For example, I have a codebase that uses a C library in some fundamental types, but parts of the codebase need to be in a binary widely distributed to many platforms that does not technically need the C library, so the codebase has more packages than you might expect, isolating the use of the C library to avoid cgo in the multi-platform binary.

There are no right answers here. Instead, we are trading off different types of work that an engineer will have to do (upfront and ongoing). LLMs influence those tradeoffs:

As LLMs do better with exam-style questions, more and smaller packages make it easier to give a complete yet isolated context for a piece of work. This is true for humans, too, which is why we use packages at all, but we trade off package size against the extra typing/plumbing/filing to make more readable code. With an LLM both doing and benefiting from a big chunk of that extra work, the tradeoff shifts. (As a bonus, we humans get more readable code!)

Smaller and more numerous packages can be compiled and tested independently of unrelated code. This is helpful for the LLM dev cycle, as it means changes can be compiled and tested before other dependent packages are refactored, and the package has a simpler test environment.

An example

Let me work an example to combine a few of the discussed ideas:

Write a reservoir sampler for the quartiles of floats.

First off is package structure. Were I doing this before LLMs, I would have chosen to have some sort of streamstat package that contained several algorithms, maybe one per file. This does not seem to be a unique opinion; here is an open source quantile package following that model. Now, I want just this one algorithm in its own package. Other variants or related algorithms can have their own packages.

Next up, what do we get from an LLM? The first pass is not bad. That prompt, with some details about wanting it in Go, got me quartile_sampler.go:

// QuartileSampler maintains a reservoir sample to estimate quartiles of a stream
// of float64 values. It uses reservoir sampling to maintain a representative
// sample when processing large streams of data.
type QuartileSampler struct {
    reservoir []float64
    seen      int
    maxSize   int
    rng       *rand.Rand
}

// NewQuartileSampler creates a new sampler that maintains up to maxSize samples.
// A larger maxSize provides more accurate estimates but uses more memory.
func NewQuartileSampler(maxSize int) *QuartileSampler { }

The core interface is good, too:

// Add adds a new value to the sampler.
func (qs *QuartileSampler) Add(value float64) { }

// Quartiles returns the estimated 25th, 50th (median), and 75th percentiles
// of the data seen so far. Returns (0,0,0) if no data has been added.
func (qs *QuartileSampler) Quartiles() (q1, median, q3 float64) {
    if len(qs.reservoir) == 0 {
        return 0, 0, 0
    }
    // Make a copy to avoid modifying the reservoir
    sorted := make([]float64, len(qs.reservoir))
    copy(sorted, qs.reservoir)
    slices.Sort(sorted)
    n := len(sorted)
    q1 = percentile(sorted, 0.25)
    median = percentile(sorted, 0.50)
    q3 = percentile(sorted, 0.75)
    return q1, median, q3
}

Great! There are also tests.

An aside: this may be the place to stop. Sometimes I use LLM codegen as a form of specialized search. E.g., I'm curious about reservoir sampling but want to see how the algorithm would be applied under some surprising constraint (for example, time-windowed sampling). Instead of doing a literature search, I might amend my prompt for an implementation that tracks freshness.
(I could also ask it to include references to the literature in the comments, which I could manually check to see if it's making things up or if there's some solid research to work from.)

I often spend 60 seconds reading some generated code, see an obvious trick I hadn't thought of, then throw it away and start over. Now I know the trick is possible. This is why it is so hard to attribute value generated by LLMs. Yes, sometimes it makes bad code, gets stuck in a rut, makes up something impossible (it hallucinated a part of the Monaco API I wish existed the other day), and wastes my time. It can also save me hours by pointing out something relevant I don't know.

Back to the code. Fascinatingly, the initial code produced didn't compile. In the middle of the Quartiles implementation, there was the line:

n := len(sorted)

That is a fine line; sorted is a slice defined a few lines earlier. But the value is never used, so gopls (and the Go compiler if you run go build) immediately says:

declared and not used: n

This is a very easy fix. If I paste the error back into the LLM, it will correct it. Though in this case, as I'm reading the code, it's quite clear to me that I can just delete the line myself, so I do.

Now the tests. I got what I expected. In quartile_sampler_test.go:

func TestQuartileSampler(t *testing.T) {
    tests := []struct {
        name    string
        data    []float64
        wantQ1  float64
        wantMed float64
        wantQ3  float64
        epsilon float64 // allowed error for floating point comparison
    }{
        {
            name:    "basic sequence",
            data:    []float64{1, 2, 3, 4, 5, 6, 7},
            wantQ1:  2,
            wantMed: 4,
            wantQ3:  6,
            epsilon: 0.1,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            qs := NewQuartileSampler(100) // Use large reservoir for accurate testing
            for _, v := range tt.data {
                qs.Add(v)
            }
            q1, med, q3 := qs.Quartiles()
            if math.Abs(q1-tt.wantQ1) > tt.epsilon {
                t.Errorf("Q1 = %v, want %v (%v)", q1, tt.wantQ1, tt.epsilon)
            }
            if math.Abs(med-tt.wantMed) > tt.epsilon {
                t.Errorf("Median = %v, want %v (%v)", med, tt.wantMed, tt.epsilon)
            }
            if math.Abs(q3-tt.wantQ3) > tt.epsilon {
                t.Errorf("Q3 = %v, want %v (%v)", q3, tt.wantQ3, tt.epsilon)
            }
        })
    }
}

Exactly the sort of thing I would write! I would run some cases through another implementation to generate expected outputs and copy them into a test like this. But there are two issues with this.

The first is that the LLM did not run these numbers through another implementation. (To the best of my knowledge. When using a sophisticated LLM service, it is hard to say for sure what is happening behind the scenes.) It made them up, and LLMs have a reputation for being weak at arithmetic. So this sort of test, while reasonable for a human to write because we base it on the output of another tool (or, if we are particularly old-school, do some arithmetic ourselves), is not great from an LLM.

The second issue is that we can do better. I am happy we now live in a time when programmers write their own tests, but we do not hold ourselves to the same standards with tests as we do with production code. That is a reasonable tradeoff; there are only so many hours in the day. But what LLMs lack in arithmetical prowess, they make up for in enthusiasm.

Let's ask for an even better test.

In the tests, implement the simplest, most readable version of the standard code for quartiles over a fixed set of known values in a slice. Then pass the test cases through the standard code and the reservoir sampler and confirm they are within an epsilon of each other.
Structure the comparison code such that it can be used in a fuzz test, too.

This got us some new test code:

// referenceQuartiles calculates the exact quartiles for a slice of float64 values
// using linear interpolation, matching the behavior expected from the sampler.
func referenceQuartiles(data []float64) (q1, median, q3 float64) { }

// compareQuartiles checks if two sets of quartiles are within epsilon of each other.
// Returns true if they match within the tolerance, false otherwise.
func compareQuartiles(q1a, meda, q3a, q1b, medb, q3b, epsilon float64) bool { }

// checkQuartiles is a test helper that compares sampler output against the reference
// implementation and reports any differences.
func checkQuartiles(t *testing.T, data []float64, epsilon float64) {
    t.Helper()

    // Get reference values
    wantQ1, wantMed, wantQ3 := referenceQuartiles(data)

    // Get sampler values using a large reservoir for accuracy
    qs := NewQuartileSampler(1000)
    for _, v := range data {
        qs.Add(v)
    }
    gotQ1, gotMed, gotQ3 := qs.Quartiles()

    if !compareQuartiles(gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon) {
        t.Errorf("Quartiles mismatch:\ngot (q1=%v, med=%v, q3=%v)\nwant (q1=%v, med=%v, q3=%v)\nepsilon=%v",
            gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon)
    }
}

The original test from above has been reworked to use checkQuartiles, and we have something new:

func FuzzQuartileSampler(f *testing.F) {
    // Add some seed corpus
    f.Add([]float64{1, 2, 3, 4, 5})

    f.Fuzz(func(t *testing.T, data []float64) {
        // Use a larger epsilon for fuzzing since we might get more extreme values
        checkQuartiles(t, data, 0.2)
    })
}

This is fun because it's wrong. My running gopls tool immediately says:

fuzzing arguments can only have the following types: string, bool, float32, float64, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64, []byte

Pasting that error back into the LLM gets it to regenerate the fuzz test such that it is built around a func(t *testing.T, data []byte) function that uses math.Float64frombits to extract floats from the data slice. Interactions like this point us toward automating the feedback from tools; all it needed was the obvious error message to make solid progress toward something useful. I was not needed.

A quick survey of the last few weeks of my LLM chat history (which, as I mentioned earlier, is not a proper quantitative analysis by any measure) shows that more than 80 percent of the time there is a tooling error, the LLM can make useful progress without me adding any insight. About half the time, it can completely resolve the issue without me saying anything of note. I am just acting as the messenger.

Where are we going? Better tests, maybe even less DRY

There was a programming movement some 25 years ago focused on the principle "don't repeat yourself." As is so often the case with short snappy principles taught to undergrads, it got taken too far. There is a lot of cost associated with abstracting out a piece of code so it can be reused; it requires creating intermediate abstractions that must be learned, and it requires adding features to the factored-out code to make it maximally useful to the maximum number of people, which means we depend on libraries filled with useless distracting features.

The past 10 to 15 years have seen a far more tempered approach to writing code, with many programmers understanding that it's better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code.
It is far less common for me to write on a code review, "This isn't worth it; separate the implementations." (Which is fortunate, because people really don't want to hear things like that after they have done all the work.) Programmers are getting better at tradeoffs.

What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn't have the hours to build properly. You can spend a lot more time writing tests to be readable because the LLM is not sitting there constantly thinking, "It would be better for the company if I went and picked another bug off the issue tracker than doing this." So the tradeoff shifts in favor of having more specialized implementations.

The place where I expect this to be most visible is language-specific REST API wrappers. Every major company API comes with dozens of these (usually low-quality) wrappers written by people who aren't actually using their implementations for a specific goal and are instead trying to capture every nook and cranny of an API in a large and complex interface. Even when it is done well, I have found it easier to go to the REST documentation (usually a set of curl commands) and implement a language wrapper for the 1 percent of the API I actually care about. It cuts down the amount of the API I need to learn upfront, and it cuts down how much future programmers (myself) reading the code need to understand.

For example, as part of my recent work on sketch.dev, I implemented a Gemini API wrapper in Go. Even though the official wrapper in Go has been carefully handcrafted by people who know the language well and clearly care, there is a lot to read to understand it:

$ go doc -all genai | wc -l
1155

My simplistic initial wrapper was 200 lines of code total: one method, three types. Reading the entire implementation is 20 percent of the work of reading the documentation of the official package, and if you try to dig into its implementation, you will discover that it is a wrapper around another largely code-generated implementation with protos and grpc and the works. All I want is to cURL and parse a JSON object. (A rough sketch of the kind of client I mean appears below.)

There obviously comes a point in a project where Gemini is the foundation of the entire app, where nearly every feature is used, where building on gRPC aligns well with the telemetry system elsewhere in your organization, and where you should use the large official wrapper. But most of the time, it's so much more time-consuming to do so because we almost always want only some wafer-thin sliver of whatever API we need to use today. So custom clients, largely written by a GPU, are far more effective for getting work done.

So I foresee a world with far more specialized code, fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small, robust interfaces and otherwise will be pulled apart into specialized code. Depending on how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend toward better software by the metrics that matter.

Automating these observations: sketch.dev

As a programmer, my instinct is to make computers do work for me. It's a lot of work getting value out of LLMs; how can a computer do it?

I believe the key to solving a problem is not to overgeneralize. Solve a particular problem and then expand slowly.
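To make "cURL and parse a JSON object" concrete, here is a rough sketch of the shape of client I have in mind: one method, a couple of types, standard library only. To be clear, this is an illustration rather than the actual sketch.dev wrapper; the endpoint path, field names, and Client layout are placeholders assumed for the example, not any real service's API.

package genclient

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "net/http"
)

// Client is the minimal state needed to call a hypothetical
// JSON-over-HTTP text-generation API.
type Client struct {
    BaseURL string // e.g. "https://api.example.com"
    APIKey  string
    HTTP    *http.Client // nil means http.DefaultClient
}

type generateRequest struct {
    Prompt string `json:"prompt"`
}

type generateResponse struct {
    Text string `json:"text"`
}

// Generate sends a prompt and returns the generated text.
func (c *Client) Generate(ctx context.Context, prompt string) (string, error) {
    body, err := json.Marshal(generateRequest{Prompt: prompt})
    if err != nil {
        return "", err
    }
    req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.BaseURL+"/v1/generate", bytes.NewReader(body))
    if err != nil {
        return "", err
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer "+c.APIKey)

    httpClient := c.HTTP
    if httpClient == nil {
        httpClient = http.DefaultClient
    }
    resp, err := httpClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return "", fmt.Errorf("generate: unexpected status %s", resp.Status)
    }

    var out generateResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return "", err
    }
    return out.Text, nil
}

A caller constructs a Client with a base URL and a key and calls Generate; everything else stays out of the way, and there is nothing here a future reader has to go study.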
So instead of building a general-purpose UI for chat programming that is just as good at COBOL as it is for Haskell, we want to focus on one particular environment. The bulk of my programming is in Go, so what I want is easy to imagine for a Go programmer:

- Something like the Go playground, built around editing a package and tests
- A chat interface onto editable code
- A little UNIX env where we can run go get and go test
- goimports integration
- gopls integration
- Automatic model feedback: on model edit, run go get, go build, and go test, and feed missing packages, compiler errors, and test failures back to the model to try to get them fixed automatically (a minimal sketch of this loop appears at the end of this article)

A few of us have built an early prototype of this: sketch.dev.

The goal is not to be a Web IDE but rather to challenge the notion that chat-based programming even belongs in what is traditionally called an IDE. IDEs are collections of tools arranged for people. It is a delicate environment where I know what is going on. I do not want an LLM spewing its first draft all over my current branch. While an LLM is ultimately a developer tool, it is one that needs its own IDE to get the feedback it needs to operate effectively.

Put another way, we didn't embed goimports into sketch for it to be used by humans but to get Go code closer to compiling using automatic signals so that the compiler can provide better error feedback to the LLM driving it. It might be better to think of sketch.dev as a Go IDE for LLMs.

This is all very recent work with a lot left to do, e.g., git integration so we can load existing packages for editing and drop the results on a branch. We also need better test feedback and more console control. (If the answer is to run sed, run sed. Be you the human or the LLM.) We are still exploring, but we're convinced that focusing an environment on a particular kind of programming will give us better results than the generalized tool.

David Crawshaw is a co-founder (and former CTO) of Tailscale, lives in the Bay Area, and is building sketch.dev. He has been programming for 30 years and is planning on another 30.
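The automatic model feedback item listed above can be pictured with a minimal sketch like the following. It is an illustration only, not sketch.dev's actual code: askModel stands in for whatever model client is in use, and applying the model's edits to the working directory is omitted.

package feedbackloop

import (
    "context"
    "fmt"
    "os/exec"
)

// askModel is a placeholder for a call to an LLM chat endpoint; a real
// implementation would send the prompt and apply the returned edits.
func askModel(ctx context.Context, prompt string) error {
    return fmt.Errorf("askModel: not implemented in this sketch")
}

// runGo runs a go subcommand in dir and returns its combined output.
func runGo(dir string, args ...string) (string, error) {
    cmd := exec.Command("go", args...)
    cmd.Dir = dir
    out, err := cmd.CombinedOutput()
    return string(out), err
}

// Iterate asks the model for code, then feeds build and test failures back
// to it until the package compiles and its tests pass, or attempts run out.
func Iterate(ctx context.Context, dir, request string, maxTries int) error {
    prompt := request
    for i := 0; i < maxTries; i++ {
        if err := askModel(ctx, prompt); err != nil {
            return err
        }
        if out, err := runGo(dir, "build", "./..."); err != nil {
            prompt = request + "\n\nThe code does not compile:\n" + out
            continue
        }
        if out, err := runGo(dir, "test", "./..."); err != nil {
            prompt = request + "\n\nTests fail:\n" + out
            continue
        }
        return nil // builds cleanly and tests pass
    }
    return fmt.Errorf("no passing build after %d attempts", maxTries)
}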
  • ARSTECHNICA.COM
    NASA defers decision on Mars Sample Return to the Trump administration
    4th and long | "We want to have the quickest, cheapest way to get these 30 samples back." By Stephen Clark | Jan 8, 2025 1:44 pm | This photo montage shows sample tubes shortly after they were deposited onto the surface by NASA's Perseverance Mars rover in late 2022 and early 2023. Credit: NASA/JPL-Caltech/MSSS

For nearly four years, NASA's Perseverance rover has journeyed across an unexplored patch of land on Mars, once home to an ancient river delta, and collected a slew of rock samples sealed inside cigar-sized titanium tubes.

These tubes might contain tantalizing clues about past life on Mars, but NASA's ever-changing plans to bring them back to Earth are still unclear.

On Tuesday, NASA officials presented two options for retrieving and returning the samples gathered by the Perseverance rover. One alternative involves a conventional architecture reminiscent of past NASA Mars missions, relying on the "sky crane" landing system demonstrated on the agency's two most recent Mars rovers. The other option would be to outsource the lander to the space industry.

NASA Administrator Bill Nelson left a final decision on a new mission architecture to the next NASA administrator working under the incoming Trump administration. President-elect Donald Trump nominated entrepreneur and commercial astronaut Jared Isaacman as the agency's 15th administrator last month.

"This is going to be a function of the new administration in order to fund this," said Nelson, a former Democratic senator from Florida who will step down from the top job at NASA on January 20.

The question now is: will they? And if the Trump administration moves forward with Mars Sample Return (MSR), what will it look like? Could it involve a human mission to Mars instead of a series of robotic spacecraft?

The Trump White House is expected to emphasize "results and speed" with NASA's space programs, with the goal of accelerating a crew landing on the Moon and sending people to explore Mars.

NASA officials had an earlier plan to bring the Mars samples back to Earth, but the program slammed into a budgetary roadblock last year when an independent review team concluded the existing architecture would cost up to $11 billion, double the previous cost projection, and wouldn't get the Mars specimens back to Earth until 2040.

This budget and schedule were non-starters for NASA. The agency tasked government labs, research institutions, and commercial companies to come up with better ideas to bring home the roughly 30 sealed sample tubes carried aboard the Perseverance rover. NASA deposited 10 sealed tubes on the surface of Mars a couple of years ago as insurance in case Perseverance dies before the arrival of a retrieval mission.

"We want to have the quickest, cheapest way to get these 30 samples back," Nelson said.

How much for these rocks?

NASA officials said they believe a stripped-down concept proposed by the Jet Propulsion Laboratory in Southern California, which previously was in charge of the over-budget Mars Sample Return mission architecture, would cost between $6.6 billion and $7.7 billion, according to Nelson.
JPL's previous approach would have put a heavier lander onto the Martian surface, with small helicopter drones that could pick up sample tubes if there were problems with the Perseverance rover.NASA previously deleted a "fetch rover" from the MSR architecture and instead will rely on Perseverance to hand off sample tubes to the retrieval lander.An alternative approach would use a (presumably less expensive) commercial heavy lander, but this concept would still utilize several elements NASA would likely develop in a more traditional government-led manner: a nuclear power source, a robotic arm, a sample container, and a rocket to launch the samples off the surface of Mars and back into space. The cost range for this approach extends from $5.1 billion to $7.1 billion. Artist's illustration of SpaceX's Starship approaching Mars. Credit: SpaceX JPL will have a "key role" in both paths for MSR, said Nicky Fox, head of NASA's science mission directorate. "To put it really bluntly, JPL is our Mars center in NASA science."If the Trump administration moves forward with either of the proposed MSR plans, this would be welcome news for JPL. The center, which is run by the California Institute of Technology under contract to NASA, laid off 955 employees and contractors last year, citing budget uncertainty, primarily due to the cloudy future of Mars Sample Return.Without MSR, engineers at the Jet Propulsion Laboratory don't have a flagship-class mission to build after the launch of NASA's Europa Clipper spacecraft last year. The lab recently struggled with rising costs and delays with the previous iteration of MSR and NASA's Psyche asteroid mission, and it's not unwise to anticipate more cost overruns on a project as complex as a round-trip flight to Mars.Ars submitted multiple requests to interview Laurie Leshin, JPL's director, in recent months to discuss the lab's future, but her staff declined.Both MSR mission concepts outlined Tuesday would require multiple launches and an Earth return orbiter provided by the European Space Agency. These options would bring the Mars samples back to Earth as soon as 2035, but perhaps as late as 2039, Nelson said. The return orbiter and sample retrieval lander could launch as soon as 2030 and 2031, respectively."The main difference is in the landing mechanism," Fox said.To keep those launch schedules, Congress must immediately approve $300 million for Mars Sample Return in this year's budget, Nelson said.NASA officials didn't identify any examples of a commercial heavy lander that could reach Mars, but the most obvious vehicle is SpaceX's Starship. NASA already has a contract with SpaceX to develop a Starship vehicle that can land on the Moon, and SpaceX founder Elon Musk is aggressively pushing for a Mars mission with Starship as soon as possible.NASA solicited eight studies from industry earlier this year. SpaceX, Blue Origin, Rocket Lab, and Lockheed Martineach with their own lander conceptswere among the companies that won NASA study contracts. SpaceX and Blue Origin are well-capitalized with Musk and Amazon's Jeff Bezos as owners, while Lockheed Martin is the only company to have built a lander that successfully reached Mars. This slide from a November presentation to the Mars Exploration Program Analysis Group shows JPL's proposed "sky crane" architecture for a Mars sample retrieval lander. The landing system would be modified to handle a load about 20 percent heavier than the sky crane used for the Curiosity and Perseverance rover landings. 
Credit: NASA/JPL The science community has long identified a Mars Sample Return mission as the top priority for NASA's planetary science program. In the National Academies' most recent decadal survey released in 2022, a panel of researchers recommended NASA continue with the MSR program but stated the program's cost should not undermine other planetary science missions.Teeing up for cancellation?That's exactly what is happening. Budget pressures from the Mars Sample Return mission, coupled with funding cuts stemming from a bipartisan federal budget deal in 2023, have prompted NASA's planetary science division to institute a moratorium on starting new missions."The decision about Mars Sample Return is not just one that affects Mars exploration," said Curt Niebur, NASA's lead scientist for planetary flight programs, in a question-and-answer session with solar system researchers Tuesday. "Its going to affect planetary science and the planetary science division for the foreseeable future. So I think the entire science community should be very tuned in to this."Rocket Lab, which has been more open about its MSR architecture than other companies, has posted details of its sample return concept on its website. Fox declined to offer details on other commercial concepts for MSR, citing proprietary concerns."We can wait another year, or we can get started now," Rocket Lab posted on X. "Our Mars Sample Return architecture will put Martian samples in the hands of scientists faster and more affordably. Less than $4 billion, with samples returned as early as 2031."Through its own internal development and acquisitions of other aerospace industry suppliers, Rocket Lab said it has provided components for all of NASA's recent Mars missions. "We can deliver MSR mission success too," the company said. Rocket Lab's concept for a Mars Sample Return mission. Credit: Rocket Lab Although NASA's deferral of a decision on MSR to the next administration might convey a lack of urgency, officials said the agency and potential commercial partners need time to assess what roles the industry might play in the MSR mission."They need to flesh out all of the possibilities of whats required in the engineering for the commercial option," Nelson said.On the program's current trajectory, Fox said NASA would be able to choose a new MSR architecture in mid-2026.Waiting, rather than deciding on an MSR plan now, will also allow time for the next NASA administrator and the Trump White House to determine whether either option aligns with the administration's goals for space exploration. In an interview with Ars last week, Nelson said he did not want to "put the new administration in a box" with any significant MSR decisions in the waning days of the Biden administration.One source with experience in crafting and implementing US space policy told Ars that Nelson's deferral on a decision will "tee up MSR for canceling." Faced with a decision to spend billions of dollars on a robotic sample return or billions of dollars to go toward a human mission to Mars, the Trump administration will likely choose the latter, the source said.If that happens, NASA science funding could be freed up for other pursuits in planetary science. The second priority identified in the most recent planetary decadal survey is an orbiter and atmospheric probe to explore Uranus and its icy moons. 
NASA has held off on the development of a Uranus mission to focus on the Mars Sample Return first.Science and geopoliticsWhether it's with robots or humans, there's a strong case for bringing pristine Mars samples back to Earth. The titanium tubes carried by the Perseverance rover contain rock cores, loose soil, and air samples from the Martian atmosphere."Bringing them back will revolutionize our understanding of the planet Mars and indeed, our place in the solar system," Fox said. "We explore Mars as part of our ongoing efforts to safely send humans to explore farther and farther into the solar system, while also ... getting to the bottom of whether Mars once supported ancient life and shedding light on the early solar system."Researchers can perform more detailed examinations of Mars specimens in sophisticated laboratories on Earth than possible with the miniature instruments delivered to the red planet on a spacecraft. Analyzing samples in a terrestrial lab might reveal biosignatures, or the traces of ancient life, that elude detection with instruments on Mars."The samples that we have taken by Perseverance actually predatethey are older than any of the samples or rocks that we could take here on Earth," Fox said. "So it allows us to kind of investigate what the early solar system was like before life began here on Earth, which is amazing."Fox said returning Mars samples before a human expedition would help NASA prioritize where astronauts should land on the red planet.In a statement, the Planetary Society said it is "concerned that NASA is again delaying a decision on the program, committing only to additional concept studies.""It has been more than two years since NASA paused work on MSR," the Planetary Society said. "It is time to commit to a path forward to ensure the return of the samples already being collected by the Perseverance rover."We urge the incoming Trump administration to expedite a decision on a path forward for this ambitious project, and for Congress to provide the funding necessary to ensure the return of these priceless samples from the Martian surface."China says it is developing its own mission to bring Mars rocks back to Earth. Named Tianwen-3, the mission could launch as soon as 2028 and return samples to Earth by 2031. While NASA's plan would bring back carefully curated samples from an expansive environment that may have once harbored life, China's mission will scoop up rocks and soil near its landing site."Theyre just going to have a mission to grab and gogo to a landing site of their choosing, grab a sample and go," Nelson said. "That does not give you a comprehensive look for the scientific community. So you cannot compare the two missions. Now, will people say that theres a race? Of course, people will say that, but its two totally different missions."Still, Nelson said he wants NASA to be first. He said he has not had detailed conversations with Trump's NASA transition team."I think it was a responsible thing to do, not to hand the new administration just one alternative if they want to have a Mars Sample Return," Nelson said. "I can't imagine that they don't. I don't think we want the only sample return coming back on a Chinese spacecraft."Stephen ClarkSpace ReporterStephen ClarkSpace Reporter Stephen Clark is a space reporter at Ars Technica, covering private space companies and the worlds space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet. 24 Comments
  • WWW.NEWSCIENTIST.COM
    Sleeping pills disrupt how the brain clears waste
    During sleep, the brain flushes out toxins that accumulate throughout the day. Robert Reader/Getty Images

Sleeping pills might help you doze off, but the sleep you get may not be as restorative. When mice were given zolpidem, a medication commonly found in sleeping pills such as Ambien, it prevented their brains from effectively clearing waste during sleep.

Sleep is critical for removing waste from the brain. At night, a clear liquid called cerebrospinal fluid circulates around brain tissues, flushing out toxins through a series of thin tubes known as the glymphatic system. "Think of it like a dishwasher that the brain turns on when asleep," says Maiken Nedergaard at the University of Rochester Medical Center in New York. However, the mechanism that pushes fluid through this network wasn't well understood until now.

Nedergaard and her colleagues implanted optical fibres in the brains of seven mice. By illuminating chemical compounds in the brain, the fibres let them track the flow of blood and cerebrospinal fluid during sleep.

They found that as levels of a molecule called norepinephrine (also called noradrenaline) rise, blood vessels in the brain constrict, decreasing the volume of blood and allowing cerebrospinal fluid to rush into the brain. When norepinephrine levels fall, blood vessels expand, pushing cerebrospinal fluid back out. In this way, fluctuations in norepinephrine during non-rapid eye movement (NREM) sleep stimulate blood vessels to act like a pump for the glymphatic system, says Nedergaard.

This finding reveals that norepinephrine plays a crucial role in cleaning waste out of the brain. Previous research has shown that, as we sleep, our brains release norepinephrine in a slow, oscillating pattern. These norepinephrine waves occur during NREM, which is a crucial sleep stage for memory, learning and other cognitive functions.

Next, the researchers treated six mice with zolpidem, a sleep medication commonly sold under the brand names Ambien and Zolpimist. While the mice fell asleep faster than those treated with a placebo, the flow of cerebrospinal fluid in their brains dropped by roughly 30 per cent, on average. In other words, "their brain doesn't get cleaned very well," says Nedergaard.

Although the experiment tested zolpidem, nearly all sleeping pills inhibit the production of norepinephrine. This suggests they may interfere with the brain's ability to flush out toxins.

It is too soon to tell whether these results will translate to humans. "Human sleep architecture is still fairly different than a mouse, but we do have the same brain circuit that was studied here," says Laura Lewis at the Massachusetts Institute of Technology. "Some of these fundamental mechanisms are likely to apply to us as well."

If sleeping pills do interfere with the brain's ability to remove toxins during sleep, that means we must develop new sleep medications, says Nedergaard. Otherwise, we risk exacerbating sleep problems, potentially worsening brain health in the process.

Journal reference: Cell, DOI: 10.1016/j.cell.2024.11.027
  • WWW.NEWSCIENTIST.COM
    We thought we knew emperor penguins robots are proving us wrong
    The emperor penguin breeding season is fraught with danger. Stefan Christmann/naturepl.com

A rover quietly surveys the forbidding icy landscape. Suddenly, it whirrs into life: it has spotted an emperor penguin. With its antenna set to scan, the 90-centimetre-long robot trundles towards the bird, searching for a signal from an RFID chip beneath the penguin's skin, recording crucial information that may help us finally understand this enigmatic species.

The emperor penguin is instantly familiar as the star of countless nature documentaries and the 2005 movie March of the Penguins. This media exposure might give the impression that we have a solid understanding of its biology. We don't. Almost all of that footage was collected from just two breeding colonies on opposite sides of Antarctica, constituting perhaps 10 per cent of the emperor penguin population. For decades, the hundreds of thousands of emperors living elsewhere along the continent's coast were virtually unstudied.

That situation is now changing. Over the past 15 years, researchers have uncovered more about these birds using new technologies, including satellites that can spot colonies from space and AI-equipped robots to scan them on the ground. "I hope we're starting to go into a golden age of research," says Daniel Zitterbart at Woods Hole Oceanographic Institution, Massachusetts.

Already, the work has revealed subtle differences in the genetics and behaviour of the penguins at different points around the Antarctic coast, and shown that they are surprisingly adaptable to changing conditions. But these discoveries have been made amid rapid warming in the region, which led the US Fish and Wildlife Service to declare emperors a threatened species in 2022.
  • WWW.BUSINESSINSIDER.COM
    The LA wildfire is ripping through a neighborhood full of A-Listers. Here are the celebrities affected.
    A wildfire has broken out in Los Angeles and is raging through the Pacific Palisades neighborhood."Star Wars" actor Mark Hamill was among the 30,000 people in LA evacuated from their homes.The average house price in the northern LA area is $4.5 million, per Realtor.com data.A wildfire in Los Angeles is tearing through the Pacific Palisades neighborhood, home to A-list actors, including Ben Affleck who bought his $20.5 million mansion there in July.Other A-listers Tom Hanks, Reese Witherspoon, Michael Keaton, Adam Sandler, Miles Teller, and Eugene Levy also live in the neighborhood.The area is between Santa Monica and Malibu in northern Los Angeles, where the average house price is $4.5 million, according to Realtor.com data.The fire started on Tuesday in the Pacific Palisades before spreading west toward the Malibu stretch of the Pacific Coast Highway. On Wednesday, the city of Malibu issued a statement on X advising residents to prepare to evacuate.Among the at least 30,000 LA residents asked to evacuate their homes because of the fire was the "Star Wars" actor Mark Hamill. He said on Instagram that he had left his Malibu home on Tuesday with his wife, Marilou, and their dog, Trixie. He described it as the "most horrific fire since '93."He said: "Evacuated Malibu so last-minute there were small fires on both sides of the road as we approached PCH."Levy told The Los Angeles Times on Tuesday that he got stuck while trying to leave the neighborhood. "The smoke looked pretty black and intense over Temescal Canyon," Levy said. "I couldn't see any flames but the smoke was very dark."Chet Hanks, the son of actors Tom Hanks and Rita Wilson, also mentioned the fire on Tuesday."The neighborhood I grew up in is burning to the ground rn. Pray for the Palisades," he wrote in an Instagram Stories post. Chet Hanks/Instagram Stories On Wednesday morning, the "Halloween" star Jamie Lee Curtis said on Instagram that she was safe but her home in West LA might not be.She wrote: "My community and possibly my home is on fire. My family is safe. Many of my friends will lose their homes. Many other communities as well. There are so many conflicting reports. With all the technology there seems to be very little information. Please post facts! It will help those wondering!"The Oscar-nominated actor James Woods said on X that he and his family safely evacuated from the Palisades but didn't know whether his home "is still standing."On Tuesday, TMZ reported that Spencer Pratt and Heidi Montag, two stars of "The Hills," lost their home in the fire after being evacuated.Actor Kate Beckinsale wrote in an Instagram post on Wednesday that "the whole of the Palisades being destroyed is unthinkably horrific.""My daughter and I lived there for most of her childhood and most of her childhood is gone," she wrote.Beckinsale shared several other posts, including one thanking local firefighters and another sharing information about assembling an emergency bag.The Palisades Charter High School also burned down, according to The Hollywood Reporter. The campus was used for films including "Carrie," "Freaky Friday," "Project X," and MTV's "Teen Wolf" TV series.The premieres for "Wolf Man" and "Unstoppable," scheduled for Tuesday, were canceled.In 2023, a study from the University of California, Irvine, found that California's wildfires had worsened each year over the past two decades.The fire that started on Tuesday quickly spread because of the Santa Ana winds, which created up to 80 mph gusts. 
The drought in Southern California also exacerbated the situation, creating dangerously dry conditions.In 2018, Kim Kardashian and her ex-husband Kanye West were criticized by fans for hiring private firefighters to protect their $60 million home in the Hidden Hills when the Woolsey Fire burned almost 100,000 acres of land.
  • WWW.BUSINESSINSIDER.COM
    US special operators are going back to their 'roots' with an eye on China and Russia, senior Pentagon official says
    US special operations forces are shifting their focus after decades of counterterrorism.Competition with China and Russia is reshaping how SOF supports the joint force.A senior Pentagon official said that special operations is also returning to its "roots."A senior Pentagon official said this week that the role ofUS special operationsis changing as the US faces increasing competition and challenges from China and Russia.With the threat of a conflict against a powerful and advancedThe direction of special operations forces (SOF) is adapting to the largest challenges facing the US a rapidly growing Chinese military and Russian state set on expansion by force.Maier said during a conversation with the Center for Strategic and International Studies think tank on Tuesday that SOF is "still doing counterterrorism, crisis response, those have been the persistent missions," but the priority is shifting towards "increasingly where we can support other elements, largely in a support role, for those strategic competition elements."That means playing a big role in solving challenges facing the joint force, like more modern adaptations to using artificial intelligence, as well as the traditional functions of SOF, such as "being that sensor out there and providing the necessary input to decision makers to better understand a situation," noted Maier, who previously led the Pentagon's Defeat-ISIS Task Force overseeing the campaign across Iraq and Syria that relied heavily on American special operators.Special operators are the US military's most highly trained troops, the go-to teams for small raids and secretive missions, but they lack the numbers and firepower to go up against larger conventional forces for long. US special operations forces are supporting the joint force as the US faces strategic competition with China and Russia. US Air Force photo by Senior Airman Lauren Cobin Much of the US' special operations presence in over 80 countries around the world is focused on working closely with foreign militaries, law enforcement, and embassies to keep a finger on the pulse. For the past 20 years, the US has relied on these forces for some the most unconventional and difficult missions, like teaming with partner forces to fight enemies or running shadowy helicopter assaults to kill or capture key leaders.Maier said he views it as both a continuation of the counterterrorism and crisis response that SOF has been doing for decades and also a step back to its origins."We're going back to the proverbial roots of supporting the joint force with some of the hardest problems against peer adversaries," Maier said.With the so-calledSOF has spent over 20 years operating in counterterrorism and unconventional warfare roles, fighting quietly in a variety of environments across the world and maintaining relationships that provide the US with information on tactics of specific groups and deeper understandings of regional and security issues.That role is now changing, albeit just as important. In a 2023 article for the Royal United Services Institute, a British think tank, David Ucko, a professor and expert on irregular warfare, argues that leaders in Washington need to examine how to best use SOF for newer challenges against Russia and China. 
That includes irregular warfare, which is "highly relevant" to strategic competition with China. But, Ucko notes, special operators fill a particular role in military operations and shouldn't be given missions that other US agencies or groups can also do. One of the deepest challenges these secretive forces face is widening surveillance by spy satellites and reconnaissance drones. SOF missions often have multiple objectives, like foreign internal defense and unconventional warfare; special operators can, for example, help boost the defensive tactics of a US ally facing a foreign aggressor, such as Taiwan against China. Allied special forces played critical roles in World War II, shaped by the need for specialization in unconventional missions and innovative tactics, such as sabotage behind enemy lines and disrupting German supply lines. In North Africa, British Special Air Service and Commonwealth Long Range Desert Group commandos aided in disrupting Axis troop deployments and airpower. During the Cold War, special operators played a role in deterring the Soviet Union's influence, maintaining a presence in, and relationships with, Western Europe and other regions. All of that historical context is informing SOF's priorities today, as the US faces similar challenges from China and Russia and their activities across the world, Maier said. "The differences, I think, here are some of the fundamental changes in adversaries' ability to access technology," he added, and their ability to "use different types of techniques than maybe we saw in the Cold War." Both China and Russia are actively bolstering their irregular warfare tactics, including reconnaissance, disinformation, electronic warfare, cyberspace and space efforts, and psychological warfare. In its report on China's military growth over the course of 2023, the Pentagon noted that China is expanding its capabilities towards a vision of future conflict it calls "intelligentized warfare," focused on AI, data, and controlling information spaces. Other elements, such as China's influence campaigns in Taiwan aimed at domestic politics and opinions on unification, are also notable.
  • WWW.BUSINESSINSIDER.COM
    Shocking videos show Palisades Fire burning out of control in California
    Destructive brush fires are erupting across California as firefighters say there's "no possibility" of containment. The Palisades, Eaton, Hurst, and Woodley fires come as powerful winds slam northwest Los Angeles. Read the original article on Business Insider
  • WWW.VOX.COM
    The unusually strong force behind the apocalyptic fires in Los Angeles
    Sustained powerful winds reaching nearly 100 miles per hour are driving fast-moving wildfires near Los Angeles, spewing smoke, destroying homes, closing roads, and forcing thousands of people to evacuate. The Palisades fire along the coast near the Santa Monica mountains has burned more than 5,000 acres as of Wednesday afternoon. The Eaton fire near Pasadena has now torched at least 2,200 acres. The blazes have killed at least two people and destroyed more than 1,000 structures. Other smaller fires are also burning in the region. These blazes are stunning in their scale and speed, jumping from ignition to thousands of acres in a day, but they're hardly unexpected. Fire forecasters have been warning since the beginning of the year that conditions were ripe for massive infernos, particularly in Southern California. "For January, above normal significant fire potential is forecast across portions of Southern California," according to a National Interagency Fire Center (NIFC) bulletin on January 2. "This was an exceptionally well-predicted event from a meteorological and fire-predictive services perspective," Daniel Swain, a climate scientist at the University of California Los Angeles, said Wednesday during a livestream. The winter months are typically when Southern California quenches its thirst with rainfall, but the past few weeks have been unusually dry, and little snowfall has accumulated in the surrounding mountains. The NIFC also noted that temperatures were "an impressive two to six degrees [Fahrenheit] above normal in most areas" in December, allowing vegetation like grasses and chaparral to readily dry out and serve as fuel. On top of this, the Santa Ana winds, Southern California's seasonal gusts, were unusually strong. They typically blow from the northeast toward the coast in the wintertime, but this year, an unusually warm ocean and a meandering jet stream are giving these gales an additional speed boost, like pointing a hair dryer at Los Angeles. Firefighters are working desperately to corral the flames and keep them away from people's homes, but there's little they can do to halt the combination of ample fuel, dry weather, and high winds, which are poised to continue. It will take another force of nature to quell this one. "Until widespread rains occur, this risk will continue," according to the NIFC bulletin. Wildfires are a natural part of the landscape in California, but the danger they pose to the region is growing because more people are living in fire-prone areas. That increases the likelihood of igniting a blaze and the scale of the damage that occurs when a fire inevitably erupts. California's growing wildfire threat has rocked the state's insurance industry and forced regulators to allow insurers to price in the risk of worsening future catastrophes. At the same time, global average temperatures are rising due to climate change, which can prime more of the landscape to burn. It will take a concerted effort on many fronts to mitigate the wildfire threat, including using more fire-resistant building materials, performing controlled burns to reduce fuels, changing where people live, improving forecasting, pricing insurance in line with the actual disaster risk, and reducing greenhouse gas emissions that are driving climate change. But in the meantime, the dangers from fires in Southern California are likely to get worse.
(Photo caption: A fast-moving brush fire in a Los Angeles suburb burned buildings and sparked evacuations Tuesday as life-threatening winds whipped the region; more than 200 acres were burning in Pacific Palisades, an upscale area with multimillion-dollar homes in the Santa Monica Mountains, shuttering a key highway and blanketing the area with thick smoke. David Swanson/AFP via Getty Images) What are the Santa Ana winds? Why are they so powerful this year? Parts of California regularly experience persistent high winds during certain times of year. The northern part of the state, including the San Francisco Bay Area, tends to see high winds in the spring and fall, known as the Diablo winds. Southern California's Santa Ana winds often arise in the winter months. "This is not a typical Santa Ana, but this is the time of year when you expect it," Swain said. The mechanisms behind the Santa Ana and Diablo winds are similar: Cool air from inland mountains rolls downhill toward the coasts. That air compresses as it moves to lower altitudes and squeezes between canyons, heating up and drying out, similar to a bicycle pump. But there are several factors that may be worsening these gusts right now. One is that the band of the Pacific Ocean near Southern California remains unusually warm following two years of record-high temperatures all over the world that triggered underwater heat waves. High temperatures in the ocean can bend the jet stream. This is a narrow band of fast-moving air at a high altitude that snakes across the planet and shapes the weather below. As it meanders, it can hold warm air under high pressure in place, allowing heat to accumulate closer to the surface. When high pressure settles over inland areas like the Great Basin northeast of Los Angeles, it starts driving air over the mountains and toward the coast. Again, wildfires are a natural and vital mechanism in the ecosystem in Southern California. They help clear decaying vegetation and restore nutrients to the soil. But people are making the destruction from wildfires far worse. The majority of wildfires in the US are ignited by humans (careless campfires, sparks from machinery, downed power lines), but there are also natural fire starters like dry lightning storms and, on rare occasions, spontaneous combustion of decaying vegetation and soil. The ignition sources of the current fires around Los Angeles aren't known yet. The population in the region is also expanding, although the growth rate has recently slowed down. More people in the area means more property, and in Southern California, that property can be quite expensive. As the fires move toward populated areas, they can do a lot of damage. "I do expect it is plausible that the Palisades fire in particular will become the costliest on record," Swain said. The weather this year has also left abundant vegetation in the region that has desiccated in the warm, dry air. And of course, humans are heating up the planet by burning fossil fuels, and that is enhancing some of the raw ingredients for dangerous fires. Ample fuel plus high wind in unusually dry weather near a major population center have converged to create an extraordinary and dangerous spate of wildfires. What's the role of climate change? Many factors have to converge to start a massive wildfire, and the variables aren't all straightforward. In recent years, California has been ping-ponging between extremely dry and wet years. That's had a strong impact on the vegetation in Southern California. Unlike the forests in the northern part of the state that grow over the course of decades, the amount of grass and brush around Los Angeles can shift widely year to year depending on precipitation.
"There is a very high degree of background variability," Swain said. "The key thing to pay attention to is the sequence of extreme weather." Last winter, the Los Angeles area was soaked in torrential downpours that set new rainfall records. The deluge helped irrigate a bumper crop of grasses and shrubs in the area. The region then experienced some of its all-time hottest temperatures, followed by one of the driest starts to winter ever measured. These swings between extreme rainfall and drought have been dubbed "weather whiplash," and climate scientists expect these shifts to become more common along the West Coast, which could increase the threat of major blazes. "It's not just that drier conditions are perpetually more likely in a warming climate, it's that this oscillation back and forth between states is something that is particularly consequential for wildfire risk in Southern California," Swain said.
  • WWW.VOX.COM
    Trump asks the Supreme Court to place him even further above the law
    On Wednesday, President-elect Donald Trump asked the Supreme Court to halt the criminal proceeding against him in New York state court. Trump was convicted of 34 felony counts of falsifying business records, related to hush money payments made to an adult film actress during the 2016 presidential election, in New York last May. He is currently scheduled to be sentenced on Friday, and that hearing will move forward unless the Supreme Court intervenes. Realistically, the immediate stakes in this suit, which is known as Trump v. New York, are low: Regardless of what the Court decides to do, Trump is unlikely to face any punishment in the case. Justice Juan Merchan, the New York judge presiding over the case, recently signaled that he would sentence Trump to an unconditional discharge, meaning that Trump, though found guilty, will not face imprisonment, probation, or a fine. And the Supreme Court's Republican majority already gave Trump sweeping immunity from prosecution for crimes he committed using his official presidential powers last July. That case involved allegedly criminal actions Trump took while he was president, so the Court has not formally ruled on whether he can be prosecuted for crimes he committed before taking office, like the falsifying of business records. However, the July decision does have some language limiting the evidence that can be used against Trump in criminal proceedings unrelated to his official conduct. Still, the case could have some long-term effects. Trump seeks to expand the already quite broad immunity from legal consequences the Republican justices gave him last July. Among other things, the immunity decision in Trump v. United States (2024) establishes that Trump cannot be prosecuted if he illegally orders the Justice Department to bring sham prosecutions against his political enemies. This new case, by contrast, involves crimes Trump committed before he won election for the first time. So a decision in Trump's favor could extend his legal immunity even further. It's easy to imagine this Court ruling in Trump's favor once again. Many of Trump's arguments in his latest brief closely track the reasoning of the July decision. And the sort of judge who would sign on to that decision is unlikely to be concerned about giving too much legal immunity to Trump. What are the specific legal issues in Trump v. New York? Asking whether the doctrine of presidential immunity requires New York to halt its current case against Trump is like asking whether your daughter's imaginary friend likes ice cream. The doctrine that former presidents are immune from criminal prosecutions is that imaginary friend. It did not exist until 2024 (why else would President Gerald Ford have needed to pardon former President Richard Nixon in 1974, for example, if Nixon was already immune?), and it has no basis in constitutional text. As a creation of the Supreme Court, the immunity doctrine can say whatever the majority of justices want it to, and so it is up to the personal whims of the justices as to whether it applies in the New York case. That said, Trump's latest brief to the justices, which is authored by Solicitor General-nominee John Sauer, makes a strong case that, if you treat the Court's July decision as legitimate, then that decision requires New York to halt its sentencing proceeding against Trump. Broadly speaking, Sauer claims that allowing the sentencing proceeding to happen on schedule would violate the Trump decision in three ways.
First, Merchan permitted testimony from White House advisers, as well as other evidence that was arguably produced while Trump was carrying out his official actions as president. The Republican justices' July decision held that former presidents have broad immunity from prosecution for their official actions in office, and it also held that a prosecutor may not invite the jury to examine acts for which a President is immune from prosecution. Second, Sauer argues that Trump is immune from any criminal proceedings while he is president-elect. This is the weakest of Sauer's three arguments. In the July decision, the Court said the Justice Department has long recognized that the separation of powers precludes the criminal prosecution of a sitting President. But even if the Court were to agree with the Justice Department on this point, a president-elect is not yet a sitting president. That said, it's unclear that there will be many future ramifications if the Court sides with Trump on this point. Any decision would affect only Trump or a future would-be president convicted of crimes. Trump is the only president in American history to be convicted of a crime, much less to be convicted and then reelected to the presidency. Finally, Sauer argues that all remaining criminal proceedings against Trump must be halted while the incoming president challenges his conviction in New York's appeals courts. This is probably Sauer's strongest argument, thanks to some language in the July opinion that favors Trump's current argument. The July decision held that the essence of immunity is its possessor's entitlement not to have to answer for his conduct in court, and the decision also suggested that questions of immunity are reviewable before trial because the essence of immunity is the entitlement not to be subject to suit. All of this suggests that Trump cannot be forced to answer for his criminal actions in New York state court or anywhere else until the question of whether he is immune from prosecution is resolved on appeal. There are, of course, reasonable arguments rebutting Sauer's claims. Merchan argued, for example, that even if testimony from presidential aides should not have been admitted at trial, this error was harmless in light of the overwhelming evidence of guilt. But, realistically, the question of whether to delay Trump's sentencing proceeding on Friday will be decided by the same six Republican officials who recently invented a new legal doctrine shielding Trump from criminal prosecutions. Sauer, in other words, does not need to make a good legal argument for delaying the hearing. He only needs to make an argument that is good enough to persuade six officials who have already bent over backward to protect the leader of their political party.