• WWW.TECHSPOT.COM
    BMW Panoramic iDrive turns the entire windshield into a display
    Forward-looking: BMW has unveiled its groundbreaking new Panoramic iDrive system, whose centerpiece is an eye-popping 3D heads-up display that spans the entire windshield. If you thought Tesla's minimalist interior was sleek, wait until you catch a glimpse of this. Gone is the traditional gauge cluster in front of the steering wheel. Instead, everything is projected directly into the driver's line of sight through the windshield.

This includes speed, driver-assistance information, stoplights, road signs, navigational directions, battery levels, and more. Everything is customizable, allowing drivers to display only the information they want. The navigation path even turns green when driver assistance is engaged, seamlessly blending the technology with directions. Frank Weber, BMW's chief technology officer, describes the setup as an augmented-reality layer that keeps the driver connected to the road.

The company told The Verge that, as higher levels of autonomous driving become available, integrating navigation instructions with driver-assistance data is a natural progression. It also said that customer feedback played a crucial role in shaping many of the intelligent windshield display's features.

The updates don't stop at the windshield. BMW has also redesigned the steering wheel, which now features haptic buttons that illuminate based on different settings. Complementing the windshield interface is a new rhombus-shaped center touchscreen that users can interact with directly. It offers a highly customizable interface where users can prioritize their most-used apps (BMW refers to these as "pixels") for easy access. BMW is also considering an app store for additional features and customizations.

The software powering the system is BMW Operating System X, which the company says was developed "100% in-house" and is based on the Android Open Source Project.

Of course, no tech release in 2025 is complete without a touch of AI. The iDrive system uses it to learn drivers' habits and behaviors, automatically surfacing relevant apps and settings. For example, if a driver frequently takes a particular route home and engages sport mode, those settings will be queued proactively. Large language models also make voice commands more natural and conversational, according to BMW: rather than using specific keywords, drivers can simply say something like "find a charging station near the grocery store."

This ambitious new interior design will debut in BMW's upcoming X-Class electric SUV in late 2025, with other vehicles built on the new "Neue Klasse" platform following suit. Such a dramatic change may polarize fans of a company with decades of legacy interiors built around classic dials and gauges. It will also be interesting to see how BMW addresses safety considerations, which have become a point of scrutiny for EV makers moving to full touchscreen interfaces. In fact, Euro NCAP is introducing new guidelines in 2026 that will require physical controls for several important vehicle functions in order to achieve a five-star safety rating.
  • WWW.DIGITALTRENDS.COM
    Samsung will soon let you rent its phones instead of buying them
    Looking to buy the Samsung Galaxy S25 but don't have the money to pay for it all upfront? Samsung may eventually let you rent the upcoming model and other Samsung Galaxy phones instead of purchasing them right off the bat.

Rumors of the smartphone subscription service came out of CES 2025, where Samsung CEO and Vice Chairman Han Jong-hee said the company plans to introduce the service at this year's Galaxy Unpacked event in two weeks, according to a report by ETNews. Han said the service, dubbed the AI Subscription Club, will launch in February, two months after it began being offered to customers in South Korea. (The "AI" part of the name is a mistranslation, and the name is likely not final.)

The smartphone subscription service will allow customers to pay a monthly fee for any phone they want and try it out before they decide to pay the full amount. It's unclear how much the monthly fees will be or whether there will be subscription tiers similar to those of streaming services like Netflix and Disney+. The subscription service will apply to tablets as well.

Samsung's hardware subscription program bears some resemblance to the one Apple had been attempting to develop alongside the Apple Pay group since 2022, with plans to launch it by the end of that year. That project was then delayed because of software issues and regulatory concerns and was ultimately canceled at the end of last year. Six months earlier, in June 2024, Apple announced that Apple Pay Later would shut down after it integrated third-party loan services into iOS 18. Google also tried to offer a subscription service for Pixel phones, Google Pixel Pass, which bundled access to YouTube Premium and Google Play Pass, but it ended in 2023.
  • WWW.WSJ.COM
    OpenAI CEO Sam Altman Denies Sexual-Abuse Claims Made by Sister
    Ann Altman's lawsuit says the alleged abuse occurred at the family's home in Missouri when she and Sam Altman were children.
  • WWW.WSJ.COM
    'Yojimbo' and 'Sanjuro': A Solitary Samurai Born of Collaboration
    Director Akira Kurosawa and star Toshiro Mifune did some of their finest work together with this pair of films from the early 1960s, and the Criterion Collection has now released them in sparkling 4K restorations.
  • WWW.WSJ.COM
    'The Last Kilo' Review: Muchacho Money
    The gang's wealth exceeded $2 billion at its peak. Members spent less time threatening rivals than meeting in attorneys' offices.
  • WWW.WSJ.COM
    'Look Up' by Ringo Starr Review: A Beatle's Twangy Turn
    The British musician highlights his longstanding love of country music on an album produced by T Bone Burnett and featuring Alison Krauss and Billy Strings.
  • ARSTECHNICA.COM
    After embarrassing blunder, AT&T promises bill credits for future outages
    If you lost service but only nine cell towers went down, you won't get a bill credit.

Jon Brodkin, Jan 8, 2025

AT&T, following last year's embarrassing botched update that kicked every device off its wireless network and blocked over 92 million phone calls, is now promising full-day bill credits to mobile customers for future outages that last at least 60 minutes and meet certain other criteria. A similar promise is being made to fiber customers for unplanned outages lasting at least 20 minutes, but only if the customer uses an AT&T-provided gateway.

The "AT&T Guarantee" announced today has caveats that can leave a disruption uncovered. AT&T says the promised mobile bill credits are "for wireless downtime lasting 60 minutes or more caused by a single incident impacting 10 or more towers."

The full-day bill credits do not include a prorated amount for the taxes and fees imposed on a monthly bill. The "bill credit will be calculated using the daily rate customer is charged for wireless service only (excludes taxes, fees, device payments, and any add-on services)," AT&T said. If an outage lasts more than 24 hours, a customer will receive another full-day bill credit for each additional day.

If nine or fewer AT&T towers stop functioning, a customer won't get a credit even if they lose service for an hour. The guarantee kicks in when a "minimum 10 towers [are] out for 60 or more minutes resulting from a single incident," the customer "was connected to an impacted tower at the time the outage occurs," and the customer "loses service for at least 60 consecutive minutes as a result of the outage."

AT&T will decide whether an outage is really an outage

The guarantee "excludes events beyond the control of AT&T, including but not limited to, natural disasters, weather-related events, or outages caused by third parties." AT&T says it will determine "in its sole discretion" whether a disruption is "a qualifying" network outage.

"Consumers will automatically receive a bill credit equaling a full day of service and we'll reach out to our small business customers with options to help make it right," AT&T said. When there's an outage, AT&T said it will "notify you via e-mail or SMS to inform you that you've been impacted. Once the interruption has been resolved, we'll contact you with details about your bill credit." If AT&T fails to provide the promised credit for any reason, customers will have to call AT&T or visit an AT&T store.

To qualify for the similar fiber-outage promise, "customers must use AT&T-provided gateways," the firm said. Other caveats can also prevent a home Internet customer from getting a bill credit. AT&T said the fiber-outage promise "excludes events beyond the control of AT&T, including but not limited to, natural disasters, weather-related events, loss of service due to downed or cut cable wires at a customer residence, issues with wiring inside customer residence, and power outages at customer premises. Also excludes outages resulting from planned maintenance."

AT&T notes that some residential fiber customers in multi-dwelling units "have an account with AT&T but are not billed by AT&T for Internet service."
In the case of an outage, these customers would not get bill credits but would instead be given the option to redeem a reward card valued at $5 or more.

Botched network update

In February 2024, AT&T caused a major outage by botching a network update and took over 12 hours to fully restore service. At the time, AT&T said it was automatically issuing credits to affected customers "for the average cost of a full day of service."

"All voice and 5G data services for AT&T wireless customers were unavailable, affecting more than 125 million devices, blocking more than 92 million voice calls, and preventing more than 25,000 calls to 911 call centers," the Federal Communications Commission said in a report issued after a months-long investigation into the incident.

The FCC report said the nationwide outage began three minutes after "AT&T Mobility implemented a network change with an equipment configuration error." This error caused the AT&T network "to enter 'protect mode' to prevent impact to other services, disconnecting all devices from the network."

The FCC found various problems in AT&T's processes that increased the likelihood of an outage and made recovery more difficult than it should have been. The agency described "a lack of adherence to AT&T Mobility's internal procedures, a lack of peer review, a failure to adequately test after installation, inadequate laboratory testing, insufficient safeguards and controls to ensure approval of changes affecting the core network, a lack of controls to mitigate the effects of the outage once it began, and a variety of system issues that prolonged the outage once the configuration error had been remedied."

AT&T said it implemented changes to prevent the same problem from happening again. The company could still face punishment, but that is less likely under Trump's pick to chair the FCC, Brendan Carr, who takes over soon. The Biden-era FCC compelled Verizon Wireless to pay a $1,050,000 fine and implement a compliance plan over a December 2022 outage in six states that lasted one hour and 44 minutes.

An AT&T executive told Reuters that the company has spent the past few years trying to regain customers' trust with better offers and product improvements. "Four years ago, we were losing share in the industry for a significant period of time... we knew we had lost our customers' trust," Reuters quoted AT&T Executive VP Jenifer Robertson as saying in an article today.
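The mobile guarantee reduces to a few mechanical checks. Here is a minimal sketch in Go of how the qualifying rules and the daily-rate credit might be evaluated; the types, the 30-day proration, and the example rate are illustrative assumptions drawn from AT&T's stated criteria, not AT&T's published implementation:

    package main

    import (
        "fmt"
        "time"
    )

    // Outage describes a single incident, per AT&T's stated mobile criteria.
    type Outage struct {
        TowersAffected   int           // towers knocked out by this single incident
        Duration         time.Duration // how long the customer lost service
        CustomerOnTower  bool          // customer was connected to an impacted tower
        BeyondATTControl bool          // e.g., natural disaster or third-party cause
    }

    // qualifies reports whether an outage meets the stated thresholds: 10+ towers,
    // 60+ consecutive minutes, customer on an impacted tower, and not caused by
    // events outside AT&T's control.
    func qualifies(o Outage) bool {
        return o.TowersAffected >= 10 &&
            o.Duration >= 60*time.Minute &&
            o.CustomerOnTower &&
            !o.BeyondATTControl
    }

    // credit returns the bill credit: one full day of wireless service (excluding
    // taxes, fees, device payments, and add-ons), plus another full day for each
    // additional 24 hours of outage.
    func credit(o Outage, monthlyWirelessRate float64) float64 {
        if !qualifies(o) {
            return 0
        }
        dailyRate := monthlyWirelessRate / 30 // assumed proration; AT&T doesn't specify the divisor
        days := 1
        for d := o.Duration; d > 24*time.Hour; d -= 24 * time.Hour {
            days++
        }
        return dailyRate * float64(days)
    }

    func main() {
        o := Outage{TowersAffected: 12, Duration: 90 * time.Minute, CustomerOnTower: true}
        fmt.Printf("credit: $%.2f\n", credit(o, 60.00)) // $2.00 on a $60/month plan
    }

On those assumptions, a qualifying 90-minute outage on a $60/month plan yields roughly a $2 credit, which gives a sense of the guarantee's scale.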
  • ARSTECHNICA.COM
    How I program with LLMs
    Generative models can be powerfully useful, if you're willing to adapt your approach.

David Crawshaw, Jan 8, 2025

This piece was originally published on David Crawshaw's blog and is reproduced here with permission.

This article is a summary of my personal experiences with using generative models while programming over the past year. It has not been a passive process. I have intentionally sought ways to use LLMs while programming in order to learn about them. The result is that I now regularly use LLMs while working, and I consider their benefits net-positive on my productivity. (My attempts to go back to programming without them are unpleasant.)

Along the way, I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev. It's very early, but so far, the experience has been positive.

Background

I am typically curious about new technology. It took very little experimentation with LLMs for me to want to see if I could extract practical value. There is an allure to a technology that can (at least some of the time) craft sophisticated responses to challenging questions. It is even more exciting to watch a computer attempt to write a piece of a program as requested and make solid progress.

The only technological shift I have experienced that feels similar happened in 1995, when we first configured my LAN with a usable default route. I replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once, I had the Internet on tap. Having the Internet all the time was astonishing and felt like the future, probably far more to me in that moment than to many who had been on the Internet longer at universities, because I was immediately dropped into high Internet technology: web browsers, JPEGs, and millions of people. Access to a powerful LLM feels like that.

So I followed this curiosity to see if a tool that can generate something mostly not wrong most of the time could be a net benefit in my daily work. The answer appears to be "yes": generative models are useful for me when I program. It has not been easy to get to this point. My underlying fascination with the new technology is the only way I have managed to figure it out, so I am sympathetic when other engineers claim LLMs are useless. But as I have been asked more than once how I can possibly use them effectively, this post is my attempt to describe what I have found so far.

Overview

There are three ways I use LLMs in my day-to-day programming:

Autocomplete. This makes me more productive by doing a lot of the more-obvious typing for me. It turns out that the current state of the art can be improved on here, but that's a conversation for another day. Even the standard products you can get off the shelf are better for me than nothing. I convinced myself of that by trying to give them up: I couldn't go a week without getting frustrated by how much mundane typing I had to do before having a FIM (fill-in-the-middle) model. This is the first place to experiment.

Search.
If I have a question about a complex environment, say, "How do I make a button transparent in CSS?", I will get a far better answer asking any consumer-based LLM (o1, Sonnet 3.5, etc.) than I do using an old-fashioned web search engine and trying to parse the details out of whatever page I land on. (Sometimes the LLM is wrong. So are people. The other day, I put my shoe on my head and asked my two-year-old what she thought of my hat. She dealt with it and gave me a proper scolding. I can deal with LLMs being wrong sometimes, too.)

Chat-driven programming. This is the hardest of the three. It is where I get the most value from LLMs, but it is also the one that bothers me the most. It involves learning a lot and adjusting how you program, and on principle, I don't like that. It requires at least as much messing about to get value out of LLM chat as it does to learn to use a slide rule, with the added annoyance that it is a non-deterministic service that regularly changes its behavior and user interface. Indeed, the long-term goal in my work is to replace the need for chat-driven programming, to bring the power of these models to a developer in a way that is not so off-putting. But as of now, I am dedicated to approaching the problem incrementally, which means figuring out how to do best with what we have and improve it.

As this is about the practice of programming, this has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say that it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use an LLM for a search-like task once, and program in a chat session once.

The rest of this is about extracting value from chat-driven programming.

Why use chat at all?

Let me try to motivate the skeptical. A lot of the value I get out of chat-driven programming is that I reach a point in the day when I know what needs to be written, and I can describe it, but I don't have the energy to create a new file, start typing, and then start looking up the libraries I need. (I'm an early-morning person, so this is usually any time after 11 am for me, though it can also be any time I context-switch into a different language/framework/etc.) LLMs perform that service for me in programming. They give me a first draft with some good ideas, several of the dependencies I need, and often some mistakes. Often, I find fixing those mistakes is a lot easier than starting from scratch.

This means chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments. Some days, I mostly write TypeScript, some days mostly Go. I spent a week in a C++ codebase last month exploring an idea and just had an opportunity to learn the HTTP server-sent events format. I am all over the place, constantly forgetting and relearning. If you spend more time proving that your optimization of a cryptographic algorithm is not vulnerable to timing attacks than you do writing the code, I don't think any of my observations here will be useful to you.

Chat-based LLMs do best with exam-style questions

Give an LLM a specific objective and all the background material it needs so it can craft a well-contained code review packet, and expect it to adjust as you question it.
There are two major elements to this:

Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results. This is why I have had little success with chat inside my IDE. My workspace is often messy, the repository I am working on is by default too large, and it is filled with distractions. One thing humans appear to be much better at than LLMs (as of January 2025) is not getting distracted. That is why I still use an LLM via a web browser: I want a blank slate on which to craft a well-contained request.

Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good. You can ask an LLM to do things you would never ask a human to do. "Rewrite all of your new tests introducing an <intermediate concept designed to make the tests easier to read>" is an appalling thing to ask a human; you're going to have days of tense back-and-forth about whether the cost of the work is worth the benefit. An LLM will do it in 60 seconds and not make you fight to get it done. Take advantage of the fact that redoing work is extremely cheap.

The ideal task for an LLM is one where it needs to use a lot of common libraries (more than a human can remember, so it is doing a lot of small-scale research for you), where it is working to an interface you designed or producing a small interface you can quickly verify as sensible, and where it can write readable tests. Sometimes this means choosing the library for it if you want something obscure (though with open source code, LLMs are quite good at this).

You always need to pass an LLM's code through a compiler and run the tests before spending time reading it. They all produce code that doesn't compile sometimes (always making errors I find surprisingly human; every time I see one, I think, "There but for the grace of God go I"). The better LLMs are very good at recovering from their mistakes; often, all they need is for you to paste the compiler error or test failure into the chat, and they fix the code.

Extra code structure is much cheaper

There are vague tradeoffs we make every day around the cost of writing, the cost of reading, and the cost of refactoring code. Let's take Go package boundaries as an example. The standard library has a package net/http that contains some fundamental types for dealing with wire format encoding, MIME types, etc. It contains an HTTP client and an HTTP server. Should it be one package or several? Reasonable people can disagree! So much so that I do not know if there is a correct answer today. What we have works; after 15 years of use, it is still not clear to me that some other package arrangement would work better.

The advantages of a larger package include centralized documentation for callers, easier initial writing, easier refactoring, and easier sharing of helper code without devising robust interfaces for it (which often involves pulling the fundamental types of a package out into yet another leaf package filled with types). The disadvantages include the package being harder to read because many different things are going on (try reading the net/http client implementation without tripping up and finding yourself in the server code for a few minutes) or harder to use because there is too much going on in it.
For example, I have a codebase that uses a C library in some fundamental types, but parts of the codebase need to be in a binary widely distributed to many platforms that does not technically need the C library, so the codebase has more packages than you might expect, isolating the use of the C library to avoid cgo in the multi-platform binary.

There are no right answers here. Instead, we are trading off different types of work that an engineer will have to do (upfront and ongoing). LLMs influence those tradeoffs:

As LLMs do better with exam-style questions, more and smaller packages make it easier to give a complete yet isolated context for a piece of work. This is true for humans, too, which is why we use packages at all, but we trade off package size against the extra typing/plumbing/filing required to make more readable code. With an LLM both doing and benefiting from a big chunk of that extra work, the tradeoff shifts. (As a bonus, we humans get more readable code!)

Smaller and more numerous packages can be compiled and tested independently of unrelated code. This is helpful for the LLM dev cycle, as it means changes can be compiled and tested before other dependent packages are refactored, and the package has a simpler test environment.

An example

Let me work an example to combine a few of the discussed ideas:

Write a reservoir sampler for the quartiles of floats.

First off is package structure. Were I doing this before LLMs, I would have chosen to have some sort of streamstat package that contained several algorithms, maybe one per file. This does not seem to be a unique opinion; here is an open source quantile package following that model. Now, I want just this one algorithm in its own package. Other variants or related algorithms can have their own packages.

Next up, what do we get from an LLM? The first pass is not bad. That prompt, with some details about wanting it in Go, got me quartile_sampler.go:

    // QuartileSampler maintains a reservoir sample to estimate quartiles of a stream
    // of float64 values. It uses reservoir sampling to maintain a representative
    // sample when processing large streams of data.
    type QuartileSampler struct {
        reservoir []float64
        seen      int
        maxSize   int
        rng       *rand.Rand
    }

    // NewQuartileSampler creates a new sampler that maintains up to maxSize samples.
    // A larger maxSize provides more accurate estimates but uses more memory.
    func NewQuartileSampler(maxSize int) *QuartileSampler { }

The core interface is good, too:

    // Add adds a new value to the sampler.
    func (qs *QuartileSampler) Add(value float64) { }

    // Quartiles returns the estimated 25th, 50th (median), and 75th percentiles
    // of the data seen so far. Returns (0,0,0) if no data has been added.
    func (qs *QuartileSampler) Quartiles() (q1, median, q3 float64) {
        if len(qs.reservoir) == 0 {
            return 0, 0, 0
        }
        // Make a copy to avoid modifying the reservoir
        sorted := make([]float64, len(qs.reservoir))
        copy(sorted, qs.reservoir)
        slices.Sort(sorted)
        n := len(sorted)
        q1 = percentile(sorted, 0.25)
        median = percentile(sorted, 0.50)
        q3 = percentile(sorted, 0.75)
        return q1, median, q3
    }

Great! There are also tests.
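The bodies of NewQuartileSampler, Add, and the percentile helper are elided above. A minimal sketch of what standard implementations look like, assuming textbook reservoir sampling (Algorithm R) and linear interpolation between ranks; this is an illustrative fill, not the model's actual output:

    // (assumes "math/rand" and "time" are imported)

    // NewQuartileSampler creates a sampler holding at most maxSize values.
    func NewQuartileSampler(maxSize int) *QuartileSampler {
        return &QuartileSampler{
            reservoir: make([]float64, 0, maxSize),
            maxSize:   maxSize,
            rng:       rand.New(rand.NewSource(time.Now().UnixNano())),
        }
    }

    // Add implements standard reservoir sampling (Algorithm R): the first
    // maxSize values fill the reservoir; after that, the i-th value seen
    // replaces a random existing sample with probability maxSize/i.
    func (qs *QuartileSampler) Add(value float64) {
        qs.seen++
        if len(qs.reservoir) < qs.maxSize {
            qs.reservoir = append(qs.reservoir, value)
            return
        }
        if j := qs.rng.Intn(qs.seen); j < qs.maxSize {
            qs.reservoir[j] = value
        }
    }

    // percentile returns the p-th percentile (0 <= p <= 1) of a sorted slice,
    // linearly interpolating between the two nearest ranks.
    func percentile(sorted []float64, p float64) float64 {
        if len(sorted) == 0 {
            return 0
        }
        pos := p * float64(len(sorted)-1)
        lo := int(pos)
        if lo >= len(sorted)-1 {
            return sorted[len(sorted)-1]
        }
        frac := pos - float64(lo)
        return sorted[lo]*(1-frac) + sorted[lo+1]*frac
    }

Note that quartile conventions differ (this sketch uses the common "R-7" interpolation), so exact expected values in tests depend on which convention the generated code actually used.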
An aside: this may be the place to stop. Sometimes I use LLM codegen as a form of specialized search. E.g., I'm curious about reservoir sampling but want to see how the algorithm would be applied under some surprising constraint, for example, time-windowed sampling. Instead of doing a literature search, I might amend my prompt for an implementation that tracks freshness. (I could also ask it to include references to the literature in the comments, which I could manually check to see if it's making things up or whether there's some solid research to work from.)

I often spend 60 seconds reading some generated code, see an obvious trick I hadn't thought of, then throw it away and start over. Now I know the trick is possible. This is why it is so hard to attribute value generated by LLMs. Yes, sometimes it makes bad code, gets stuck in a rut, makes up something impossible (it hallucinated a part of the monaco API I wish existed the other day), and wastes my time. It can also save me hours by pointing out something relevant I don't know.

Back to the code. Fascinatingly, the initial code produced didn't compile. In the middle of the Quartiles implementation, there was the line:

    n := len(sorted)

That is a fine line; sorted is a slice defined a few lines earlier. But the value is never used, so gopls (and the Go compiler, if you run go build) immediately says:

    declared and not used: n

This is a very easy fix. If I paste the error back into the LLM, it will correct it. Though in this case, as I'm reading the code, it's quite clear to me that I can just delete the line myself, so I do.

Now the tests. I got what I expected. In quartile_sampler_test.go:

    func TestQuartileSampler(t *testing.T) {
        tests := []struct {
            name    string
            data    []float64
            wantQ1  float64
            wantMed float64
            wantQ3  float64
            epsilon float64 // allowed error for floating point comparison
        }{
            {
                name:    "basic sequence",
                data:    []float64{1, 2, 3, 4, 5, 6, 7},
                wantQ1:  2,
                wantMed: 4,
                wantQ3:  6,
                epsilon: 0.1,
            },
        }
        for _, tt := range tests {
            t.Run(tt.name, func(t *testing.T) {
                qs := NewQuartileSampler(100) // Use large reservoir for accurate testing
                for _, v := range tt.data {
                    qs.Add(v)
                }
                q1, med, q3 := qs.Quartiles()
                if math.Abs(q1-tt.wantQ1) > tt.epsilon {
                    t.Errorf("Q1 = %v, want %v (%v)", q1, tt.wantQ1, tt.epsilon)
                }
                if math.Abs(med-tt.wantMed) > tt.epsilon {
                    t.Errorf("Median = %v, want %v (%v)", med, tt.wantMed, tt.epsilon)
                }
                if math.Abs(q3-tt.wantQ3) > tt.epsilon {
                    t.Errorf("Q3 = %v, want %v (%v)", q3, tt.wantQ3, tt.epsilon)
                }
            })
        }
    }

Exactly the sort of thing I would write! I would run some cases through another implementation to generate expected outputs and copy them into a test like this. But there are two issues with this.

The first is that the LLM did not run these numbers through another implementation. (To the best of my knowledge; when using a sophisticated LLM service, it is hard to say for sure what is happening behind the scenes.) It made them up, and LLMs have a reputation for being weak at arithmetic. So this sort of test, while reasonable for a human to write, because we base it on the output of another tool (or, if we are particularly old-school, do some arithmetic ourselves), is not great coming from an LLM.

The second issue is that we can do better. I am happy we now live in a time when programmers write their own tests, but we do not hold ourselves to the same standards with tests as we do with production code. That is a reasonable tradeoff; there are only so many hours in the day. But what LLMs lack in arithmetical prowess, they make up for in enthusiasm.

Let's ask for an even better test:

In the tests, implement the simplest, most readable version of the standard code for quartiles over a fixed set of known values in a slice. Then pass the test cases through the standard code and the reservoir sampler and confirm they are within an epsilon of each other.
Structure the comparison code such that it can be used in a fuzz test, too.

This got us some new test code:

    // referenceQuartiles calculates the exact quartiles for a slice of float64 values
    // using linear interpolation, matching the behavior expected from the sampler.
    func referenceQuartiles(data []float64) (q1, median, q3 float64) { }

    // compareQuartiles checks if two sets of quartiles are within epsilon of each other.
    // Returns true if they match within the tolerance, false otherwise.
    func compareQuartiles(q1a, meda, q3a, q1b, medb, q3b, epsilon float64) bool { }

    // checkQuartiles is a test helper that compares sampler output against the reference
    // implementation and reports any differences.
    func checkQuartiles(t *testing.T, data []float64, epsilon float64) {
        t.Helper()
        // Get reference values
        wantQ1, wantMed, wantQ3 := referenceQuartiles(data)
        // Get sampler values using a large reservoir for accuracy
        qs := NewQuartileSampler(1000)
        for _, v := range data {
            qs.Add(v)
        }
        gotQ1, gotMed, gotQ3 := qs.Quartiles()
        if !compareQuartiles(gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon) {
            t.Errorf("Quartiles mismatch:\ngot (q1=%v, med=%v, q3=%v)\nwant (q1=%v, med=%v, q3=%v)\nepsilon=%v",
                gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon)
        }
    }

The original test from above has been reworked to use checkQuartiles, and we have something new:

    func FuzzQuartileSampler(f *testing.F) {
        // Add some seed corpus
        f.Add([]float64{1, 2, 3, 4, 5})
        f.Fuzz(func(t *testing.T, data []float64) {
            // Use a larger epsilon for fuzzing since we might get more extreme values
            checkQuartiles(t, data, 0.2)
        })
    }

This is fun because it's wrong. My running gopls tool immediately says:

    fuzzing arguments can only have the following types: string, bool, float32, float64, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64, []byte

Pasting that error back into the LLM gets it to regenerate the fuzz test such that it is built around a func(t *testing.T, data []byte) function that uses math.Float64frombits to extract floats from the data slice. Interactions like this point us toward automating the feedback from tools; all it needed was the obvious error message to make solid progress toward something useful. I was not needed.

Doing a quick survey of the last few weeks of my LLM chat history (which, as I mentioned earlier, is not a proper quantitative analysis by any measure) shows that in more than 80 percent of cases where there is a tooling error, the LLM can make useful progress without me adding any insight. About half the time, it can completely resolve the issue without me saying anything of note. I am just acting as the messenger.

Where are we going? Better tests, maybe even less DRY

There was a programming movement some 25 years ago focused on the principle "don't repeat yourself." As is so often the case with short, snappy principles taught to undergrads, it got taken too far. There is a lot of cost associated with abstracting out a piece of code so it can be reused; it requires creating intermediate abstractions that must be learned, and it requires adding features to the factored-out code to make it maximally useful to the maximum number of people, which means we depend on libraries filled with useless distracting features.

The past 10 to 15 years have seen a far more tempered approach to writing code, with many programmers understanding that it's better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code.
It is far less common for me to write on a code review, "This isn't worth it; separate the implementations." (Which is fortunate, because people really don't want to hear things like that after they have done all the work.) Programmers are getting better at tradeoffs.

What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn't have the hours to build properly. You can spend a lot more time writing tests to be readable because the LLM is not sitting there constantly thinking, "It would be better for the company if I went and picked another bug off the issue tracker than doing this." So the tradeoff shifts in favor of having more specialized implementations.

The place where I expect this to be most visible is language-specific REST API wrappers. Every major company API comes with dozens of these (usually low-quality) wrappers written by people who aren't actually using their implementations for a specific goal and are instead trying to capture every nook and cranny of an API in a large and complex interface. Even when it is done well, I have found it easier to go to the REST documentation (usually a set of curl commands) and implement a language wrapper for the 1 percent of the API I actually care about. It cuts down the amount of the API I need to learn upfront, and it cuts down how much future programmers (myself included) reading the code need to understand.

For example, as part of my recent work on sketch.dev, I implemented a Gemini API wrapper in Go. Even though the official wrapper in Go has been carefully handcrafted by people who know the language well and clearly care, there is a lot to read to understand it:

    $ go doc -all genai | wc -l
    1155

My simplistic initial wrapper was 200 lines of code total: one method, three types. Reading the entire implementation is 20 percent of the work of reading the documentation of the official package, and if you try to dig into its implementation, you will discover that it is a wrapper around another largely code-generated implementation with protos and gRPC and the works. All I want is to cURL and parse a JSON object.
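A minimal sketch of such a wafer-thin client, assuming the public generateContent REST endpoint and response shape from Google's documentation; this is illustrative, not the actual sketch.dev wrapper:

    // Package genai is a deliberately tiny Gemini client: one request method
    // and a handful of types. The endpoint and JSON shapes below follow the
    // public REST docs and should be treated as assumptions.
    package genai

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    type Part struct {
        Text string `json:"text"`
    }

    type Content struct {
        Parts []Part `json:"parts"`
    }

    type request struct {
        Contents []Content `json:"contents"`
    }

    type response struct {
        Candidates []struct {
            Content Content `json:"content"`
        } `json:"candidates"`
    }

    type Client struct {
        APIKey string
        Model  string // e.g., "gemini-1.5-flash"
    }

    // Generate sends a single-turn prompt and returns the first candidate's text.
    func (c *Client) Generate(prompt string) (string, error) {
        url := fmt.Sprintf(
            "https://generativelanguage.googleapis.com/v1beta/models/%s:generateContent?key=%s",
            c.Model, c.APIKey)
        body, err := json.Marshal(request{Contents: []Content{{Parts: []Part{{Text: prompt}}}}})
        if err != nil {
            return "", err
        }
        resp, err := http.Post(url, "application/json", bytes.NewReader(body))
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return "", fmt.Errorf("gemini: unexpected status %s", resp.Status)
        }
        var out response
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            return "", err
        }
        if len(out.Candidates) == 0 || len(out.Candidates[0].Content.Parts) == 0 {
            return "", fmt.Errorf("gemini: empty response")
        }
        return out.Candidates[0].Content.Parts[0].Text, nil
    }

Everything here is plain net/http and encoding/json; there is nothing to learn beyond the request and response shapes, which is the point.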
There obviously comes a point in a project where Gemini is the foundation of the entire app, where nearly every feature is used, where building on gRPC aligns well with the telemetry system elsewhere in your organization, and where you should use the large official wrapper. But most of the time, doing so is far more time-consuming, because we almost always want only some wafer-thin sliver of whatever API we need to use today. So custom clients, largely written by a GPU, are far more effective for getting work done.

So I foresee a world with far more specialized code, fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small, robust interfaces and otherwise will be pulled apart into specialized code. Depending on how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend toward better software by the metrics that matter.

Automating these observations: sketch.dev

As a programmer, my instinct is to make computers do work for me. It's a lot of work getting value out of LLMs; how can a computer do it?

I believe the key to solving a problem is not to overgeneralize. Solve a particular problem and then expand slowly. So instead of building a general-purpose UI for chat programming that is just as good at COBOL as it is at Haskell, we want to focus on one particular environment. The bulk of my programming is in Go, so what I want is easy to imagine for a Go programmer:

- Something like the Go playground, built around editing a package and tests
- A chat interface onto editable code
- A little UNIX env where we can run go get and go test
- goimports integration
- gopls integration
- Automatic model feedback: on every model edit, run go get, go build, and go test, then feed missing packages, compiler errors, and test failures back to the model so it can try to fix them automatically

A few of us have built an early prototype of this: sketch.dev.

The goal is not to be a web IDE but rather to challenge the notion that chat-based programming even belongs in what is traditionally called an IDE. IDEs are collections of tools arranged for people. They are a delicate environment where I know what is going on. I do not want an LLM spewing its first draft all over my current branch. While an LLM is ultimately a developer tool, it is one that needs its own IDE to get the feedback it needs to operate effectively.

Put another way: we didn't embed goimports into sketch for humans to use, but to get Go code closer to compiling using automatic signals, so that the compiler can provide better error feedback to the LLM driving it. It might be better to think of sketch.dev as a Go IDE for LLMs.

This is all very recent work with a lot left to do, e.g., git integration so we can load existing packages for editing and drop the results on a branch. We also need better test feedback and more console control. (If the answer is to run sed, run sed. Be you the human or the LLM.) We are still exploring, but we're convinced that focusing an environment on a particular kind of programming will give us better results than a generalized tool.

David Crawshaw is a co-founder (and former CTO) of Tailscale, lives in the Bay Area, and is building sketch.dev. He has been programming for 30 years and is planning on another 30.
  • ARSTECHNICA.COM
    NASA defers decision on Mars Sample Return to the Trump administration
    "We want to have the quickest, cheapest way to get these 30 samples back."

Stephen Clark, Jan 8, 2025

[Photo montage: sample tubes shortly after they were deposited onto the surface by NASA's Perseverance Mars rover in late 2022 and early 2023. Credit: NASA/JPL-Caltech/MSSS]

For nearly four years, NASA's Perseverance rover has journeyed across an unexplored patch of land on Mars, once home to an ancient river delta, and collected a slew of rock samples sealed inside cigar-sized titanium tubes.

These tubes might contain tantalizing clues about past life on Mars, but NASA's ever-changing plans to bring them back to Earth are still unclear.

On Tuesday, NASA officials presented two options for retrieving and returning the samples gathered by the Perseverance rover. One alternative involves a conventional architecture reminiscent of past NASA Mars missions, relying on the "sky crane" landing system demonstrated on the agency's two most recent Mars rovers. The other option would be to outsource the lander to the space industry.

NASA Administrator Bill Nelson left a final decision on a new mission architecture to the next NASA administrator, who will work under the incoming Trump administration. President-elect Donald Trump nominated entrepreneur and commercial astronaut Jared Isaacman as the agency's 15th administrator last month.

"This is going to be a function of the new administration in order to fund this," said Nelson, a former Democratic senator from Florida who will step down from the top job at NASA on January 20.

The question now is: will they? And if the Trump administration moves forward with Mars Sample Return (MSR), what will it look like? Could it involve a human mission to Mars instead of a series of robotic spacecraft?

The Trump White House is expected to emphasize "results and speed" with NASA's space programs, with the goal of accelerating a crew landing on the Moon and sending people to explore Mars.

NASA officials had an earlier plan to bring the Mars samples back to Earth, but the program slammed into a budgetary roadblock last year when an independent review team concluded the existing architecture would cost up to $11 billion, double the previous cost projection, and wouldn't get the Mars specimens back to Earth until 2040.

This budget and schedule were non-starters for NASA. The agency tasked government labs, research institutions, and commercial companies with coming up with better ideas for bringing home the roughly 30 sealed sample tubes carried aboard the Perseverance rover. NASA deposited 10 sealed tubes on the surface of Mars a couple of years ago as insurance in case Perseverance dies before the arrival of a retrieval mission.

"We want to have the quickest, cheapest way to get these 30 samples back," Nelson said.

How much for these rocks?

NASA officials believe a stripped-down concept proposed by the Jet Propulsion Laboratory in Southern California, which previously was in charge of the over-budget Mars Sample Return mission architecture, would cost between $6.6 billion and $7.7 billion, according to Nelson.
JPL's previous approach would have put a heavier lander on the Martian surface, with small helicopter drones that could pick up sample tubes if there were problems with the Perseverance rover. NASA previously deleted a "fetch rover" from the MSR architecture and instead will rely on Perseverance to hand off sample tubes to the retrieval lander.

An alternative approach would use a (presumably less expensive) commercial heavy lander, but this concept would still utilize several elements NASA would likely develop in a more traditional, government-led manner: a nuclear power source, a robotic arm, a sample container, and a rocket to launch the samples off the surface of Mars and back into space. The cost range for this approach extends from $5.1 billion to $7.1 billion.

[Artist's illustration of SpaceX's Starship approaching Mars. Credit: SpaceX]

JPL will have a "key role" in both paths for MSR, said Nicky Fox, head of NASA's science mission directorate. "To put it really bluntly, JPL is our Mars center in NASA science."

If the Trump administration moves forward with either of the proposed MSR plans, it would be welcome news for JPL. The center, which is run by the California Institute of Technology under contract to NASA, laid off 955 employees and contractors last year, citing budget uncertainty, primarily due to the cloudy future of Mars Sample Return.

Without MSR, engineers at the Jet Propulsion Laboratory don't have a flagship-class mission to build after the launch of NASA's Europa Clipper spacecraft last year. The lab recently struggled with rising costs and delays on the previous iteration of MSR and on NASA's Psyche asteroid mission, and it's not unwise to anticipate more cost overruns on a project as complex as a round-trip flight to Mars. Ars submitted multiple requests in recent months to interview Laurie Leshin, JPL's director, about the lab's future, but her staff declined.

Both MSR mission concepts outlined Tuesday would require multiple launches and an Earth-return orbiter provided by the European Space Agency. These options would bring the Mars samples back to Earth as soon as 2035, but perhaps as late as 2039, Nelson said. The return orbiter and sample retrieval lander could launch as soon as 2030 and 2031, respectively. "The main difference is in the landing mechanism," Fox said.

To keep those launch schedules, Congress must immediately approve $300 million for Mars Sample Return in this year's budget, Nelson said.

NASA officials didn't identify any examples of a commercial heavy lander that could reach Mars, but the most obvious vehicle is SpaceX's Starship. NASA already has a contract with SpaceX to develop a Starship vehicle that can land on the Moon, and SpaceX founder Elon Musk is aggressively pushing for a Mars mission with Starship as soon as possible.

NASA solicited eight studies from industry earlier this year. SpaceX, Blue Origin, Rocket Lab, and Lockheed Martin, each with its own lander concept, were among the companies that won NASA study contracts. SpaceX and Blue Origin are well-capitalized with Musk and Amazon's Jeff Bezos as owners, while Lockheed Martin is the only company to have built a lander that successfully reached Mars.

[Slide from a November presentation to the Mars Exploration Program Analysis Group showing JPL's proposed "sky crane" architecture for a Mars sample retrieval lander. The landing system would be modified to handle a load about 20 percent heavier than the sky crane used for the Curiosity and Perseverance rover landings.
Credit: NASA/JPL]

The science community has long identified a Mars Sample Return mission as the top priority for NASA's planetary science program. In the National Academies' most recent decadal survey, released in 2022, a panel of researchers recommended NASA continue with the MSR program but stated that the program's cost should not undermine other planetary science missions.

Teeing up for cancellation?

That's exactly what is happening. Budget pressures from the Mars Sample Return mission, coupled with funding cuts stemming from a bipartisan federal budget deal in 2023, have prompted NASA's planetary science division to institute a moratorium on starting new missions.

"The decision about Mars Sample Return is not just one that affects Mars exploration," said Curt Niebur, NASA's lead scientist for planetary flight programs, in a question-and-answer session with solar system researchers Tuesday. "It's going to affect planetary science and the planetary science division for the foreseeable future. So I think the entire science community should be very tuned in to this."

Rocket Lab, which has been more open about its MSR architecture than other companies, has posted details of its sample return concept on its website. Fox declined to offer details on other commercial concepts for MSR, citing proprietary concerns.

"We can wait another year, or we can get started now," Rocket Lab posted on X. "Our Mars Sample Return architecture will put Martian samples in the hands of scientists faster and more affordably. Less than $4 billion, with samples returned as early as 2031."

Through its own internal development and its acquisitions of other aerospace industry suppliers, Rocket Lab said it has provided components for all of NASA's recent Mars missions. "We can deliver MSR mission success too," the company said.

[Rocket Lab's concept for a Mars Sample Return mission. Credit: Rocket Lab]

Although NASA's deferral of a decision on MSR to the next administration might convey a lack of urgency, officials said the agency and potential commercial partners need time to assess what roles industry might play in the MSR mission. "They need to flesh out all of the possibilities of what's required in the engineering for the commercial option," Nelson said. On the program's current trajectory, Fox said, NASA would be able to choose a new MSR architecture in mid-2026.

Waiting, rather than deciding on an MSR plan now, will also allow time for the next NASA administrator and the Trump White House to determine whether either option aligns with the administration's goals for space exploration. In an interview with Ars last week, Nelson said he did not want to "put the new administration in a box" with any significant MSR decisions in the waning days of the Biden administration.

One source with experience in crafting and implementing US space policy told Ars that Nelson's deferral of a decision will "tee up MSR for canceling." Faced with a choice between spending billions of dollars on a robotic sample return or putting billions of dollars toward a human mission to Mars, the Trump administration will likely choose the latter, the source said.

If that happens, NASA science funding could be freed up for other pursuits in planetary science. The second priority identified in the most recent planetary decadal survey is an orbiter and atmospheric probe to explore Uranus and its icy moons.
NASA has held off on developing a Uranus mission in order to focus on Mars Sample Return first.

Science and geopolitics

Whether it's with robots or humans, there's a strong case for bringing pristine Mars samples back to Earth. The titanium tubes carried by the Perseverance rover contain rock cores, loose soil, and air samples from the Martian atmosphere.

"Bringing them back will revolutionize our understanding of the planet Mars and, indeed, our place in the solar system," Fox said. "We explore Mars as part of our ongoing efforts to safely send humans to explore farther and farther into the solar system, while also ... getting to the bottom of whether Mars once supported ancient life and shedding light on the early solar system."

Researchers can perform more detailed examinations of Mars specimens in sophisticated laboratories on Earth than is possible with the miniature instruments delivered to the red planet on a spacecraft. Analyzing samples in a terrestrial lab might reveal biosignatures, the traces of ancient life, that elude detection with instruments on Mars.

"The samples that we have taken by Perseverance actually predate, they are older than, any of the samples or rocks that we could take here on Earth," Fox said. "So it allows us to kind of investigate what the early solar system was like before life began here on Earth, which is amazing."

Fox said returning Mars samples before a human expedition would help NASA prioritize where astronauts should land on the red planet.

In a statement, the Planetary Society said it is "concerned that NASA is again delaying a decision on the program, committing only to additional concept studies."

"It has been more than two years since NASA paused work on MSR," the Planetary Society said. "It is time to commit to a path forward to ensure the return of the samples already being collected by the Perseverance rover. We urge the incoming Trump administration to expedite a decision on a path forward for this ambitious project, and for Congress to provide the funding necessary to ensure the return of these priceless samples from the Martian surface."

China says it is developing its own mission to bring Mars rocks back to Earth. Named Tianwen-3, the mission could launch as soon as 2028 and return samples to Earth by 2031. While NASA's plan would bring back carefully curated samples from an expansive environment that may have once harbored life, China's mission will scoop up rocks and soil near its landing site.

"They're just going to have a mission to grab and go. Go to a landing site of their choosing, grab a sample, and go," Nelson said. "That does not give you a comprehensive look for the scientific community. So you cannot compare the two missions. Now, will people say that there's a race? Of course, people will say that, but it's two totally different missions."

Still, Nelson said he wants NASA to be first. He said he has not had detailed conversations with Trump's NASA transition team. "I think it was a responsible thing to do, not to hand the new administration just one alternative if they want to have a Mars Sample Return," Nelson said. "I can't imagine that they don't. I don't think we want the only sample return coming back on a Chinese spacecraft."
  • WWW.NEWSCIENTIST.COM
    Sleeping pills disrupt how the brain clears waste
    [During sleep, the brain flushes out toxins that accumulate throughout the day. Robert Reader/Getty Images]

Sleeping pills might help you doze off, but the sleep you get may not be as restorative. When mice were given zolpidem, a medication commonly found in sleeping pills such as Ambien, it prevented their brains from effectively clearing waste during sleep.

Sleep is critical for removing waste from the brain. At night, a clear liquid called cerebrospinal fluid circulates around brain tissues, flushing out toxins through a series of thin tubes known as the glymphatic system. "Think of it like a dishwasher that the brain turns on when asleep," says Maiken Nedergaard at the University of Rochester Medical Center in New York. However, the mechanism that pushes fluid through this network wasn't well understood until now.

Nedergaard and her colleagues implanted optical fibres in the brains of seven mice. By illuminating chemical compounds in the brain, the fibres let the researchers track the flow of blood and cerebrospinal fluid during sleep.

They found that as levels of a molecule called norepinephrine (also called noradrenaline) rise, blood vessels in the brain constrict, decreasing the volume of blood and allowing cerebrospinal fluid to rush into the brain. When norepinephrine levels fall, blood vessels expand, pushing cerebrospinal fluid back out. In this way, fluctuations in norepinephrine during non-rapid eye movement (NREM) sleep stimulate blood vessels to act like a pump for the glymphatic system, says Nedergaard.

This finding reveals that norepinephrine plays a crucial role in cleaning waste out of the brain. Previous research has shown that, as we sleep, our brains release norepinephrine in a slow, oscillating pattern. These norepinephrine waves occur during NREM sleep, a stage crucial for memory, learning and other cognitive functions.

Next, the researchers treated six mice with zolpidem, a sleep medication commonly sold under the brand names Ambien and Zolpimist. While these mice fell asleep faster than those treated with a placebo, the flow of cerebrospinal fluid in their brains dropped by roughly 30 per cent, on average. In other words, "their brain doesn't get cleaned very well," says Nedergaard.

Although the experiment tested zolpidem, nearly all sleeping pills inhibit the production of norepinephrine, which suggests they may interfere with the brain's ability to flush out toxins.

It is too soon to tell whether these results will translate to humans. "Human sleep architecture is still fairly different than a mouse, but we do have the same brain circuit that was studied here," says Laura Lewis at the Massachusetts Institute of Technology. "Some of these fundamental mechanisms are likely to apply to us as well."

If sleeping pills do interfere with the brain's ability to remove toxins during sleep, that means we must develop new sleep medications, says Nedergaard. Otherwise, we risk exacerbating sleep problems, potentially worsening brain health in the process.

Journal reference: Cell, DOI: 10.1016/j.cell.2024.11.027