• ARSTECHNICA.COM
    New NOVA doc puts Revolutionary War weapons to the test
not just men with muskets

Pitting the Brown Bess against the long rifle, testing the first military submarine, and more.

Jennifer Ouellette – Apr 14, 2025 10:30 am
Credit: GBH/NOVA

The colonial victory against the British in the American Revolutionary War was far from a predetermined outcome. In addition to good strategy and the timely appearance of key allies like the French, Continental soldiers relied on several key technological innovations in weaponry. But just how accurate is an 18th-century musket when it comes to hitting a target? Did the rifle really determine the outcome of the war? And just how much damage did cannon inflict?

A team of military weapons experts and re-enactors set about testing some of those questions in a new NOVA documentary, Revolutionary War Weapons. The documentary examines the firing range and accuracy of Brown Bess muskets and long rifles used by both the British and the Continental army during the Battles of Lexington and Concord; the effectiveness of Native American tomahawks for close combat (no, they were usually not thrown as depicted in so many popular films, but there are modern throwing competitions today); and the effectiveness of cannons against the gabions and other defenses employed to protect the British fortress during the pivotal Siege of Yorktown. There is even a fascinating segment on the first military submarine, dubbed "the Turtle," created by American inventor David Bushnell.

To capture all the high-speed ballistics action, director Stuart Powell relied upon a range of high-speed cameras called the Phantom Range. "It is like a supercomputer," Powell told Ars. "It is a camera, but it doesn't feel like a camera. You need to be really well-coordinated on the day when you're using it because it bursts for like 10 seconds.
It doesn't record constantly because it's taking so much data. Depending on what the frame rate is, you only get a certain amount of time. So you're trying to coordinate that with someone trying to fire a 250-year-old piece of technology. If the gun doesn't go off, if something goes wrong on set, you'll miss it. Then it takes five minutes to reboot and get ready for the new shot. So a lot of the shoot revolves around the camera; that's not normally the case."

Constraints to keep the run time short meant that not every experiment the crew filmed ended up in the final documentary, according to Powell. For instance, there was one experiment in a hypoxia chamber for the segment on the Turtle, meant to see how long a person could function once the sub had descended, limiting the oxygen supply. "We felt there was slightly too much on the Turtle," said Powell. "It took up a third of the whole film." Also cut, for similar reasons, were power demonstrations for the musket, using boards instead of ballistic gel. But these cuts were anomalies in the tightly planned shooting schedule; most of the footage found its way onscreen.

The task of setting up all those field experiments fell to experts like military historian and weapons expert Joel Bohy, who is a frequent appraiser for Antiques Roadshow. We caught up with Bohy to learn more.

Redcoat re-enactors play out the Battle of Lexington. GBH/NOVA
Continental Army re-enactors returning fire. GBH/NOVA
The Battle of Lexington, which took place on April 19, 1775. National Army Museum/Public domain

Ars Technica: Obviously you can't work with the original weapons because they're priceless. How did you go about making replicas as close as possible to the originals?

Joel Bohy: Prior to our live fire studies, I started to collect the best contemporary reproductions of all of the different arms that were used. Over the years, I've had these custom-built, and now I have about 14 of them, so we can cover pretty much every different type of arm used in the Revolution. I have my pick when we want to go out to the range and shoot at ballistics gelatin. We've published some great papers. The latest one was in conjunction with a bullet strike study where we went through and used modern forensic techniques to not only locate where each shooter was and what caliber the gun was, using ballistics rods and lasers, but we also had 18th-century house sections built and shot at the sections to replicate that damage. It was a validation study, and those firearms came in very handy.

Ars Technica: What else can we learn from these kinds of experiments?

Joel Bohy: One of the things that's great about the archeology end of it is when we're finding fired ammunition. I mostly volunteer with archaeologists on the Revolutionary War. One of my colleagues has worked on the Little Bighorn battlefield doing firing pin impressions, which leave a fingerprint, so he could track troopers and Native Americans across the battlefields. With [the Revolutionary War], it's harder to do because we're using smooth-bore guns that don't necessarily leave a signature. But what they do leave is a caliber, and they also leave a location. We GIS all this stuff and map it, and it's told us things about the battles that we never knew before. We just did one last August that hasn't been released yet that changes where people thought a battle took place.

A replica Revolutionary War era rifle being fired in the field.
GBH/NOVA
High-speed cameras capture the gunfire close-up. GBH/NOVA

We like to combine that with our live fire studies. So when we [conduct the latter], we take a shot, then we metal detect each shot, bag it, tag it. We record all the data that we see on our musket balls that we fired so that when we're on an archeology project, we can correlate that with what we see in the ground. We can see if it hits a tree, if it hits rocks, how close was a soldier when they fired—all based upon the deformation of the musket ball.

Ars Technica: What is the experience of shooting a replica of a musket compared to, say, a modern rifle?

Joel Bohy: It's a lot different. When you're firing a modern rifle, you pull the trigger and it's very quick—a matter of milliseconds and the bullet's downrange. With the musket, it's similar, but it's slower, and you can anticipate the shot. By the time the cock goes down, the flint strikes the hammer, it ignites the powder in the pan, which goes through the vent and sets off the charge—there's a lot more time involved in that. So you can anticipate and flinch. You may not necessarily get the best shot as you would on a more modern rifle. There's still a lot of kick, and there's a lot more smoke because of the black powder that's being used. With modern smokeless powder, you have very little smoke compared to the muskets.

Ars Technica: It's often said that throughout the history of warfare, whoever has the superior weapons wins. This series presents a more nuanced picture of how such conflicts play out.

John Hargreaves making David Bushnell's submarine bomb.
GBH/Nova
Full-size model of the Turtle submarine on display at the Royal Navy submarine museum. Geni/CC BY-SA 4.0
The Turtle submarine design. Shubol3D/CC BY-SA 4.0

Joel Bohy: In the Revolutionary War, you have both sides basically using the same type of firearm. Yes, some were using rifles, depending on what region you were from, and units in the British Army used rifles. But for the most part, they're all using flintlock mechanisms and smoothbore guns. What comes into play in the Revolution is that on the [Continental] side, they don't have the supply of arms that the British do. There was an embargo in place in 1774 so that no British arms could be shipped into Boston and North America. So you have a lot of innovation from gunsmiths and blacksmiths and clockmakers, who were taking older gun parts, barrels, and locks and building a functional firearm. You saw a lot of the Americans at the beginning of the war trying to scrape through with these guns made from old parts and cobbled together. They're functional. We didn't really have that lock-making and barrel-making industry here. A lot of that stuff we had imported. So even if a gun was being made here, the firing mechanism and the barrels were imported. So we had to come up with another way to do it.

We started to receive a trickle of arms from the French in 1777, and to my mind, that's what helped change the outcome of the war. Not only did we have French troops arriving, but we also had French cloth, shoes, hats, tin, powder, flints, and a ton of arms being shipped in. The French took all of their old guns from the last model that they had issued to the army, and they basically sold them all to us.
So we had this huge influx of French arms that helped resupply us and made the war viable for us.

Close-up of a cannon firing. GBH/NOVA
The storming of Redoubt No. 10 during the Siege of Yorktown. Public domain
A sampling of tomahawk replicas. GBH/NOVA

Ars Technica: There are a lot of popular misconceptions about the history of the American Revolution. What are a couple of things that you wish more Americans understood about that conflict?

Joel Bohy: The onset of the American Revolution, April 1775, when the war began—these weren't just a bunch of farmers who grabbed their rifle from over the fireplace and went out and beat the British Army. These people had been training and arming themselves for a long time. They had been doing it for generations before in wars with Native forces and the French since the 17th century. So by the time the Revolution broke out, they were as prepared as they could be for it.

"The rifle won the Revolution" is one of the things that I hear. No, it didn't. Like I said, the French arms coming in helped us win the Revolution. A rifle is a tool, just like a smoothbore musket is. It has its benefits and it has its downfalls. It's slower to load and you can't mount a bayonet on it, but it's more accurate, whereas with the musket, you can load and fire faster and you can mount a bayonet. So the gun that really won the Revolution was the musket, not the rifle. It's all well and good to be proud of being an American and our history and everything else, but these people just didn't jump out of bed and fight.
These people were training, they were drilling, they were preparing and supplying not just arms, but food, cloth, tents, things that they would need to continue to have an army once the war broke out. It wasn't just a big—poof—this happened and we won.

Revolutionary War Weapons is now streaming on YouTube and is also available on PBS.

Jennifer Ouellette, Senior Writer: Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
  • WWW.INFORMATIONWEEK.COM
    FICO CAO Scott Zoldi: Innovation Helps Operationalize AI
Lisa Morgan, Freelance Writer | April 14, 2025 | 9 Min Read | Brain light via Alamy Stock

FICO Chief Analytics Officer Scott Zoldi has spent the last 25 years leading analytics and AI at HNC and FICO (which merged). FICO is well known in the consumer sector for credit scoring, while the FICO Platform helps businesses understand their customers better so they can provide hyper-personalized customer experiences.

“From a FICO perspective, it’s making sure that we continue to develop AI in a responsible way,” says Zoldi. “There’s a lot of [hype] about generative AI now, and our focus has been around operationalizing it effectively so we can realize this concept of ‘the golden age of AI’ in terms of deploying technologies that actually work and solve business problems.”

While today’s AI platforms make model governance and efficient deployment easier, and provide greater model development control, organizations still need to select an AI technique that best fits the use case. A lot of the model hallucinations and unethical behavior are based on the data on which the models are built, Zoldi says. “I see companies, including FICO, building their own data sets for specific domain problems that we want to address with generative AI. We’re also building our own foundational models, which is fully within the grasp of almost all organizations now,” he says.

He says their biggest challenge is that you can never totally get rid of hallucinations. “What we need to do is basically have a risk-based approach for who’s allowed to use the outputs, when they’re allowed to use the outputs, and then maybe a secondary score, such as an AI risk score or AI trust score, that basically says this answer is consistent with the data on which it was built and the AI is likely not hallucinating.” Some reasons for building one’s own models include full control of how the model is built, and reducing the probability of bias and hallucinations based on the data quality.
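The trust-score idea Zoldi describes, checking whether an answer is consistent with the data a model was built on, can be sketched in miniature. The example below is purely illustrative and is not FICO's algorithm (which the article does not detail): it scores a response by its best bag-of-words cosine similarity to a set of "knowledge anchor" passages, so an off-topic response earns a low score and gets flagged as a possible hallucination. A production system would use semantic embeddings rather than raw word counts.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def trust_score(response: str, anchors: list[str]) -> float:
    """Score a response by its best similarity to any knowledge anchor.

    A score near 0 means the response shares little vocabulary with the
    material the model was built on -- a crude hallucination flag.
    """
    resp = Counter(response.lower().split())
    return max(cosine(resp, Counter(a.lower().split())) for a in anchors)

# Hypothetical knowledge anchors for a credit-domain model.
anchors = [
    "credit risk models are trained on payment history data",
    "fraud detection uses transaction patterns",
]

# An on-topic response scores high; an off-topic one scores near zero.
print(trust_score("credit risk models use payment history", anchors))
print(trust_score("the moon is made of cheese", anchors))
```

A risk-based policy like the one Zoldi describes would then route low-scoring answers to a human reviewer instead of releasing them directly.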
“If you build a model and it produces an output, it could be a hallucination or not. You won’t know unless you know the answer, and that’s really the problem. We produce AI trust scores at the same time as we produce the language models because they’re built on the same data,” says Zoldi. “[The trust score algorithms] understand what the large language models are supposed to do. They understand the knowledge anchors -- the knowledge base that the model has been trained on -- so when a user asks a question, it will look at the prompt and the response and provide a trust score that indicates how well the model’s response aligns with the knowledge anchors on which the model was built. It’s basically a risk-based approach.”

FICO has spent considerable time focused on how best to incorporate small or focused language models as opposed to simply connecting to a generic GenAI model via an API. These “smaller” models may have eight to 10 billion parameters versus 20 billion or more than 100 billion, for example. He adds that you can take a small language model and achieve the same performance as a much larger model, because you can allow that small language model to spend more time reasoning out an answer. “And it’s powerful because it means that organizations that can only afford a smaller set of hardware can build a smaller model and deploy it in such a way that it’s less costly to use and just as performant as a large language model for a lot less cost, both in model development and in the inference costs of actually using it in a production sense.”

The company has also been using agentic AI. “Agentic AI is not new, but we now have frameworks that assign decision authority to independent AI operators. I’m okay with agentic AI, because you decompose problems into much simpler problems, and those simpler problems [require] much simpler models,” says Zoldi.
“The next area is a combination of agentic AI and large language models, though building small language models and solving problems in a safe way is probably top of mind for most of our customers.”

For now, FICO’s primary use case for agentic AI is generating synthetic data to help counter and stay ahead of threat actors’ evolving methods. Meanwhile, FICO has been building focused language models that address financial fraud and scams, credit risk, originations, collections, behavior scoring, and how to enable customer journeys. In fact, Zoldi recently created a focused model in only 31 days using a very small GPU. “I think we’ve all seen the headlines about these humongous models with billions of parameters and thousands of GPUs, but you can go pretty far with a single GPU,” says Zoldi.

Challenges Zoldi Sees in 2025

One of the biggest challenges CIOs face is anticipating the shifting nature of the US regulatory environment. However, Zoldi believes regulation and innovation go hand in hand. “I firmly believe that regulation and innovation inspire each other, but others are wondering how to develop their AI applications appropriately when [regulations are not prescriptive],” says Zoldi. “If they don’t tell you how to meet the regulation, then you’re guessing how the regulations might change and how to meet them.”

Many organizations consider regulation a barrier to innovation rather than an inspiration for it.

“The innovation is basically a challenge statement like, ‘What does that innovation need to look like?’ so that I can meet my business objective, get a prediction, and have an interpretable model while also having ethical AI. That means better models,” says Zoldi. “Some people believe there shouldn’t be any constraints, but if you don’t have them, people will continue to ask for more data and ignore copyrights.
You can also go down a deep learning path where models are uninterpretable, unexplainable, and often unethical.”

What Innovation at FICO Looks Like

At FICO, innovation and operationalization are synonymous. “We just built our first focused model last year. We’ve been demonstrating how small models on task-specific domain problems perform just as well as the large language models you can get commercially, and then we operationalize it,” says Zoldi. “That means I’m coming up with the most efficient way to embed AI in my software. We’re looking at unique software designs within our FICO Platform to enable the execution of these technologies efficiently.”

Some time ago, Zoldi and his team wanted to add audit capabilities to the FICO Platform. To do it, they used AI blockchains. “An AI blockchain codifies how the model was developed, what needs to be monitored, and when you pull the model. Those are really important concepts to incorporate from an innovation perspective when we operationalize, so a big part of innovation is around operationalization. It’s around the sensible use of generative AI to solve very specific problems in the pockets of our business that would benefit most. We’re certainly playing with things like agentic AI and other concepts to see whether that would be the attractive direction for us in the future.”

The audit capabilities FICO built can track every decision made on the platform, what decisions or configurations have changed, why they changed, when they changed, and who changed them. “This is about software and the components, how strategies change, and how that model works. One of the main things is ensuring that there is auditing of all the steps that occur when an AI or machine learning model gets deployed in a platform, and how it’s being operated, so you can understand things like who’s changing the model or strategy, who made that decision, whether it was tested prior to deployment, and what the data is to support the solution.
For us, that validation would belong in a blockchain so there is an immutable record of those configurations.”

FICO uses AI blockchains when it develops and executes models, and to memorialize every decision made.

“Observability is a huge concept in AI platforms today. When we develop models, we have a blockchain that explains how we developed them so we can meet governance and regulatory requirements. The same blockchain holds exactly what you need for real-time monitoring of AI models, and that wouldn’t be possible if observability were not such a core concept in today’s software,” says Zoldi. “Innovation in operationalization really comes from the fact that the software on which organizations build and deploy their decision solutions is changing as software and cloud computing advance, so the way we would have done it 25, 20, or 10 years ago is not the way that we do it most efficiently today. And that changes the way that we must operationalize. It changes the way we deploy and the way we even look at basic things like data.”

Why Zoldi Has His Own Software Development Team

Most software development organizations fall under a CIO or CTO, which is also true at FICO, though Zoldi also has his own software development team and works in partnership with FICO’s CTO.

“If a FICO innovation has to be operationalized, there must be a near-term view of how it can be deployed. Our software development team makes sure that we come up with the right software architectures to deploy because we need the right throughput and latency,” says Zoldi.
“Our CTO, Bill Waid, and I both focus a lot of our time on those new software designs so that we can make sure that all that value can be operationalized.”

A specialized software team has been reporting to Zoldi for nearly 17 years, and one benefit is that it allows Zoldi to explore how he wants to operationalize, so he can make recommendations to the CTO and platform teams and ensure that new ideas can be operationalized responsibly.

“If I want to take one of these focused language models and understand the most efficient way to deploy it and do inferencing, I’m not dependent on another team. It allows me to innovate rapidly, because everything that we develop in my team needs to be operationalized and deployable. That way, I don’t come with just an interesting algorithm and a business case. I come with an interesting algorithm, a business case, and a piece of software, so I can say these are its operating parameters. It allows me to make sure that I essentially have my own ability to prioritize where I need software talent focused for my types of problems and my AI solutions. And that’s important because I may be looking three, four, or five years ahead and need to know what we will need.”

The other benefit is that the CTO and the larger software organization don’t have to be AI experts. “I think most high-performing AI and machine learning research teams like the one that I run really need to have that software component so they have some control, and they’re not in some sort of prioritization queue for getting software attention,” says Zoldi. “Unless those people are specialized in AI, machine learning, and MLOps, it’s going to be a poor experience. That’s why FICO is taking this approach and why we have the division of concerns.”

About the Author: Lisa Morgan, Freelance Writer. Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek.
She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • WWW.TECHNOLOGYREVIEW.COM
    A vision for the future of automation
The manufacturing industry is at a crossroads: Geopolitical instability is fracturing supply chains from the Suez to Shenzhen, impacting the flow of materials. Businesses are battling rising costs and inflation, coupled with a shrinking labor force, with more than half a million unfilled manufacturing jobs in the U.S. alone. And climate change is further intensifying the pressure, with more frequent extreme weather events and tightening environmental regulations forcing companies to rethink how they operate. New solutions are imperative.

Meanwhile, advanced automation, powered by the convergence of emerging and established technologies, including industrial AI, digital twins, the internet of things (IoT), and advanced robotics, promises greater resilience, flexibility, sustainability, and efficiency for industry. Individual success stories have demonstrated the transformative power of these technologies, providing examples of AI-driven predictive maintenance reducing downtime by up to 50%. Digital twin simulations can significantly reduce time to market and bring environmental dividends, too: One survey found 77% of leaders expect digital twins to reduce carbon emissions by 15% on average.

Yet broad adoption of this advanced automation has lagged. “That’s not necessarily or just a technology gap,” says John Hart, professor of mechanical engineering and director of the Center for Advanced Production Technologies at MIT. “It relates to workforce capabilities and financial commitments and risk required.” For small and medium enterprises, and those with brownfield sites (older facilities with legacy systems), the barriers to implementation are significant.

In recent years, governments have stepped in to accelerate industrial progress. Through a revival of industrial policies, governments are incentivizing high-tech manufacturing, re-localizing critical production processes, and reducing reliance on fragile global supply chains.
All these developments converge in a key moment for manufacturing. The external pressures on the industry—met with technological progress and these new political incentives—may finally enable the shift toward advanced automation. Download the full report. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
  • WWW.BUSINESSINSIDER.COM
    Katy Perry joined Lauren Sánchez for an 11-minute space flight today. Here's who joined them — and who designed their spacesuits.
The NS-31 crew featured: Sánchez, an Emmy-winning journalist and fiancée of Amazon founder Jeff Bezos; pop star Perry; Gayle King, the award-winning CBS News anchor; Amanda Nguyen, a bioastronautics research scientist and civil rights activist; Aisha Bowe, a former NASA rocket scientist; and Kerianne Flynn, a film producer.

Not only was Nguyen the first Vietnamese woman in space, but this was the first all-female space crew since 1963, when Valentina Tereshkova, a Russian engineer, crewed a solo flight.

Sánchez told Elle magazine she chose the other crew members because they're all "storytellers in their own right. They're going to go up to space and be able to spread what they felt in different ways."

Sánchez asked Fernando Garcia and Laura Kim, the creative directors of luxury brand Oscar de la Renta and co-founders of their own brand Monse, to make fashionable spacesuits for the crew, she told The New York Times. Garcia and Kim partnered with Creative Character Engineering, a Hollywood costume company, to create the Monse Blue Origin suits.

On Sunday, Perry posted an Instagram video showing the capsule, explaining that they dubbed their crew "The Taking Up Space Crew" and promising to sing during the flight. During the flight, she sang as promised, but it was hard to make out what. King said on the Blue Origin livestream that Perry sang "What a Wonderful World" by Louis Armstrong.

"It's not about me. It's not about singing my songs," Perry said of her song choice. "It's about a collective energy and making space for future women. It's about this wonderful world that we see right out there and appreciating it. This is all for the benefit of Earth."

Meanwhile, Nguyen said on Instagram she would conduct multiple experiments on women's health and plants during the brief flight.
  • WWW.VOX.COM
    The end of “college for all”
Is college for everybody? According to Chelsea Waite, a senior researcher at the Center on Reinventing Public Education, the answer is no. And more students, parents, and educators are realizing it.

Waite spent two years speaking to administrators, teachers, parents, and students at six high schools in New England to learn more about students' post-graduation aspirations. The study specifically concerned New England high schools, but Waite says she's heard from school leaders across the country that the findings resonate. "What we found is that the vision that they painted was that they want every single student in that school to have a pathway to a good life," Waite says.

What does it mean to have a "good life" in this context? And what does a path that doesn't include college look like? That's what we tackle on this week's episode of Explain It to Me, Vox's weekly call-in show. Check out the conversation between Waite and host Jonquilyn Hill; it's been edited for length and clarity. You can listen to Explain It to Me on Apple Podcasts, Spotify, or wherever you get podcasts. If you'd like to submit a question, send an email to askvox@vox.com or call 1-800-618-8545.

When I was a kid, it felt like the purpose of high school was to prepare every single person to go off to a university. Has that changed now?

Let's go back to the beginning of high school, because I think that will help us answer where we are now. When high schools first started in the US, they were not universal, and they were really designed for elites: largely white, male, middle- and upper-class students who would go to high school as a way to get to higher education in order to then go into leadership roles in society. Then from the 1910s to 1940s, there was a big high school movement that basically made high schools kind of like mass education for everyone.
The idea there is that we have a responsibility as a society to make sure that young people are prepared for the world. For some of them, that might mean college. For others, it might mean they're better working with their hands and they should be in a different kind of job or career. As time went on, it became very clear that there was major inequality in who got access to what path.

Yeah, I remember my dad telling me his school counselor said, "Maybe you should just join the military." That phrasing feels weird for a number of reasons. (He eventually got his doctorate.)

Take your dad's experience and then compare it to how you described your experience. I think that's a great representation of what changed from the 1950s to '70s all the way to the '80s, '90s, and early 2000s, where there was really this recognition that we are not giving young people equal opportunities to get into college, which is associated with economic and social mobility and opportunity and higher earnings over your lifetime. There were lots of schools — including a number of charter schools — that opened with this "college for all" mission.

Now fast-forward to where we are now. There has been a lot of reckoning about how pushing every student to go to college, and to take on the cost of college, without necessarily being really clear about what they want it to do for them means that we have a lot of students who enroll in college and then never complete a degree, take on a ton of debt, and generally struggle to make college really work for them as a path to the rest of their career.

Now we're in this place where it's a little more holistic: If you want to go to college, you can. If you want to join the military, you can. If you want to do a trade or start working, you can.

Is this shift coming from the students themselves, or is it coming from somewhere else?

Some of it's from students themselves.
Students are genuinely questioning if college is worth it and if college is really the right thing for them, knowing what they know about themselves. What we’re hearing from students is that choosing to go to college has financial risk. There’s social pressure and social dynamics that students are not sure they really want to take on, especially coming out of the pandemic. Some students didn’t even get a real full high school experience. They described not necessarily feeling ready to just jump into the college experience. I think it’s really a testament to students knowing what they themselves need.

Parents are saying they just want their kids to be happy. I think every generation of parents to some degree would say that. But are parents really okay if that means their kids aren’t going to college?

It’s mixed. We’re in a moment right now where a lot of people are wrestling with this question. What we heard from many parents is that they really wanted their child to make the best choice for them. Parents are also seeing the data. There still is clear evidence that more education over your lifetime does mean more lifetime earnings on average. But the average is key there. If you actually look at the spread from the lowest to the highest earners at different levels of educational attainment, there’s a whole lot of overlap.

Do you see any resistance from high schools?

We hear some. And here’s where I think it’s coming from: Teachers all went to college. So everybody in a school, for the most part, has gone through a path that’s included college at some point. So it is hard to get out of your own experience and really recognize the value of taking an alternative pathway.
Some parents and even teachers that we talked to said that they had some concerns about this shift to celebrating a bigger spectrum of post-secondary opportunities; that [it] means the school is lowering expectations.

That doesn’t have to be true. We are seeing schools where expectations remain really high: Every student graduates both prepared to go to college, if they choose it, and really knowledgeable about the kinds of careers they might want to pursue, including some that don’t involve a degree right away. However, I think the concern about lowering expectations is totally legitimate. There’s a big risk to guard against: going backward in time to that period where teachers and even some parents are saying, “Some students are made for college and others are really better off going to the military or working with their hands.”

Are we still asking too much of students? Looking back, I was very fortunate that at 15 I wanted to become a journalist and I’m doing it as an adult. But that’s so rare. How are kids supposed to know what they want to do with the rest of their lives? Are we asking too much of kids?

I don’t think we are. I think the risk is that we create dead ends. If you at 15 said, “I want to be a journalist and that’s what is lighting me up right now,” and you went and got an internship with another journalist and you started to take some early college journalism classes, and then decided, “Actually, you know what? I want to be an engineer.” If you had shut off that opportunity to be an engineer, or more precisely if your school had shut off that opportunity by saying, “She’s in this track for journalism. We don’t really have to teach her science or math,” that is a dead end. So we need schools that create no dead ends for students, but encourage students to explore early on what kind of real-world career they can imagine for themselves.
  • WWW.DAILYSTAR.CO.UK
    Hit Blizzard video game tipped for 'Netflix treatment' TV show as fans share hopes
One industry insider has suggested that one Blizzard franchise is getting a Netflix adaptation, despite talks between the two companies seemingly falling through a few years ago. Tech, 14:27, 14 Apr 2025 (updated 14:28). Freya is the newest addition to the Overwatch roster.

It's been a busy few weeks for Blizzard. The company confirmed that BlizzCon will return in 2026, announced a huge shakeup for Overwatch 2, and revealed that next year will see a new Diablo 4 expansion. That was followed by a new roadmap for that game, as well as confirmation of a ten-year plan for the franchise, plus a recent Hearthstone expansion and World of Warcraft patch.

In all honesty, it feels as though Blizzard has been firing on all cylinders in recent months, but now one report suggests it has ambitions beyond gaming. Speaking on the Xbox Two podcast, Windows Central's Jez Corden suggested Blizzard is perhaps working with Netflix to bring one of its iconic series to your TV screen. Just after the hour mark, Corden says: "I'm not going to say what game it is, but there's a Blizzard franchise that's getting some kind of Netflix treatment". He also suggests the show could be revealed at BlizzCon 2026, but doesn't give any further information.

It'd be fair to say Blizzard has a huge pool of tales to tell, whether the Netflix project is live action or animated. The studio's World of Warcraft is an MMO spanning decades, while the Warcraft franchise itself is packed with deep lore and memorable characters. It's been a hot minute since we've had a StarCraft game, but it remains close to fans' hearts, while Diablo's gory, dark world would lend itself nicely to an animated adventure. For our money, though, it feels like Overwatch is the prime candidate here.
Blizzard's hero shooter helped establish a genre and has a huge cast of characters. It's also no stranger to stunning cinematics – we still get goosebumps from the Zero Hour trailer.

Netflix is also no stranger to gaming adaptations. Aside from the (excellent) Arcane (based on League of Legends), Cyberpunk: Edgerunners was fantastic, while Castlevania was also well received. The Witcher was perhaps a little more mixed, while Resident Evil disappointed, but the company has leaned into the 'geek culture' crossover with regular Geeked Week events. Last year's event confirmed a Splinter Cell animated show, the recently released Devil May Cry anime, and much more.
  • METRO.CO.UK
    Nintendo Switch 2 online menu revealed and it’s a small improvement
A modest makeover (Nintendo)

New pictures provide a better look at the Nintendo Switch 2’s user interface, and it looks like a minor upgrade over the original console. The Nintendo Switch 2 might sport mouse functionality and vastly improved technical specifications, but the console itself is largely an iterative upgrade. This is most clear in the system’s user interface, shown off during the Switch 2 Direct earlier this month, which is basically the same as the original console’s aside from some new icons for social features like GameChat and transferable Game-Key Cards.

Another part of the system’s interface has now been revealed via Nintendo’s website, and it’s another conservative revamp. The images in question show off the redesigned Switch Online app, which is accessible from the home menu and lets players view the retro game library, membership perks, and details of their subscription. As you can see from the pictures, the overall presentation looks tidier and more appealing, but it isn’t a huge leap from the original app’s design.

More retro game widgets (Nintendo)

The Switch 2 will still have the separate retro console apps, where you’ll actually be able to play games from the Game Boy, GameCube, and others depending on your subscription, but those are expected to look the same. GameCube games will be available for Nintendo Switch Online + Expansion Pack subscribers who own a Switch 2 when the console launches on June 5. The Legend Of Zelda: The Wind Waker, SoulCalibur 2, and F-Zero GX will be playable at launch, with the likes of Super Mario Sunshine and Pokémon Colosseum set to arrive at a later date.

Switch your profile picture (Nintendo)

Unfortunately, the eShop on Switch 2 will not have any background music, but Nintendo has said it will run better than the Switch’s storefront. Speaking to press about the eShop, Nintendo Switch 2 producer Kouichi Kawamoto said: ‘I wanted to make sure that it was a smooth experience.
That the scrolling of the list doesn’t stall, that it’s very smooth, pages load fast.’ The Switch 2 is set to launch with Mario Kart World, which has caused some commotion over its price.

The Switch 2 interface is very familiar (Nintendo)
  • GIZMODO.COM
    Most Carbon-Rich Asteroids Never Make It to Earth—and Now We Know Why
Earth’s meteorite collection just got called out for being a little biased—and what’s more, a team of astronomers pinpointed exactly why that bias occurs. Carbonaceous asteroids are all over our solar system, both in the main belt and closer to Earth. But very few of the carbon-rich rocks are actually found on Earth, comprising just 4% of the meteorites recovered on our planet’s surface. The team wanted to understand what causes the discrepancy. Their findings, published today in Nature Astronomy, indicate that carbonaceous asteroids get obliterated by the Sun and Earth’s atmosphere before they can make it to the ground.

“We’ve long suspected weak, carbonaceous material doesn’t survive atmospheric entry,” said Hadrien Devillepoix, a researcher at Australia’s Curtin Institute of Radio Astronomy and co-author of the paper, in a university release. “What this research shows is many of these meteoroids don’t even make it that far: they break apart from being heated repeatedly as they pass close to the Sun.”

The team analyzed nearly 8,000 meteoroid impacts and 540 potential falls from 19 different observation networks around the globe to understand why carbonaceous asteroids are so rare on Earth. Carbonaceous meteorites give scientists the unique opportunity to study some of the oldest material in our solar system. But researchers also recover carbon-rich asteroid material directly from space; Japan’s Hayabusa2 mission and NASA’s OSIRIS-REx both plucked rocky material from distant asteroids and brought those samples to Earth, where they can be investigated more fully than remote observations allow.

“Carbon-rich meteorites are some of the most chemically primitive materials we can study—they contain water, organic molecules and even amino acids,” said Patrick Shober, a researcher at the Paris Observatory and co-author of the paper, in the same release.
“However, we have so few of them in our meteorite collections that we risk having an incomplete picture of what’s actually out there in space and how the building blocks of life arrived on Earth,” Shober added.

The team found that meteoroids created by tidal disruption events—when asteroids swing close enough to a planet to be torn apart by its tidal forces—are particularly fragile and are less likely than other types of asteroids to survive atmospheric entry. Only the hardiest carbon-rich asteroids make it to Earth, after surviving the Sun’s heat and the fiery burnup of entering Earth’s atmosphere. If astronomers want a proper assessment of the diversity of carbon-rich rocks, they’ll have to account for those that couldn’t survive the journey to Earth.
  • WWW.ARCHDAILY.COM
    Verde School / Ricardo Gusmao Arquitetos + Guido Otero Arquitetura
Verde School / Ricardo Gusmao Arquitetos + Guido Otero Arquitetura

Area: 48,438 ft²
Year: 2023
Photographs: Pedro Kok, Manuel Sá
Manufacturers: Day Brasil, Elevatec, Eliane, Hunter Douglas, Intercity, Keramika, Knauf, Metadil, Novidario, Pormade, San Marmore, Santiglass, Turra Engenharia, Victor Cobervickas
Lead Architects: Ricardo Gusmão, Guido Otero

Text description provided by the architects. Escola Verde's new unit is located on Canal 6, 50 meters from the beach. In addition to its proximity to the beach, the strategic position of the land within the block also provided important elements for the project: it used to be an empty space between high-rise buildings facing the sea and a village of houses facing the mainland. The school building thus operates at the boundary between the scales of its neighbors and seeks ways to relate to the canal and the sea, creating a unique environment for students and the local community.

The 4,500 m² building is divided into four floors: the ground floor, organized around the covered arrival courtyard, has a double ceiling height, crosses the building longitudinally, and houses the more collective programs such as recreation and the cafeteria. The first floor contains the administrative spaces, and the two upper floors are dedicated to classrooms, laboratories, and art rooms, accessed by a large support space.
The roof, completely open, serves as a large open square that allows multiple uses and relates extensively to the surrounding views, expanding the possibilities for outdoor activities.

To speed up construction, the adopted building method was precast concrete, and the walls were made of soil-cement bricks with aluminum frames. These materials were left exposed and, like the installations, give a didactic character to the construction and facilitate its maintenance. For solar protection, two types of solar shading were installed: on the ground floor, a perforated metal sheet marks the more collective spaces; on the upper floors, horizontal brises were designed with a metal structure and planters.

Project location: Santos, Brazil (location to be used only as a reference; it could indicate city/country but not exact address). Materials: steel, concrete. Published on April 14, 2025. Cite: "Verde School / Ricardo Gusmao Arquitetos + Guido Otero Arquitetura" [Escola Verde / Ricardo Gusmao Arquitetos + Guido Otero Arquitetura] 14 Apr 2025. ArchDaily. <https://www.archdaily.com/1028943/verde-school-ricardo-gusmao-arquitetos-plus-guido-otero-arquitetura> ISSN 0719-8884
  • WWW.TECHNEWSWORLD.COM
    NTT’s Upgrade 2025 Event: A Showcase of Possibility Without Purpose
NTT is one of the most powerful technology conglomerates in the world. As a telecommunications giant, the company operates a global network infrastructure, ranks among the top three data center providers worldwide, and employs more than 330,000 people. Through its NTT Data subsidiary, it boasts deep expertise in cybersecurity, next-gen networking, quantum research, and enterprise IT services. With these assets, NTT should be positioned as an industry-defining force in the AI-powered future.

However, at its recent NTT Research Upgrade 2025 event, it chose not to lead. Instead of a bold, clear articulation of vision and ambition, it offered a disjointed series of ideas, framed more as academic explorations than as strategic imperatives. Lofty language, vague metaphors, and philosophical asides dominated the company’s core messaging. While the technology showcased was real and impressive, the leadership tone was tentative — more deferential to partners than directive to the market. In short, NTT missed the moment, a strategic misstep for a company with its capabilities.

Ambitious Language, Unclear Strategy

NTT Executive Chairman Jun Sawada opened the keynote with expansive, almost poetic language. He spoke about “upgrading reality,” building a “network of AIs,” and promoting a “pluralistic value society.” He referenced philosophical inquiries and the creation of the Kyoto Institute of Philosophy, which aims to bring together Eastern and Western thought in shaping the future of human-AI coexistence.

These are intriguing themes. However, they lacked anchoring in a keynote intended to showcase technology leadership, one that instead devolved into a couple of fireside-style chats. There was little connection to product roadmaps, customer impact, or near-term business outcomes. Sawada spoke of IOWN — the company’s ambitious Innovative Optical and Wireless Network — but left unexplained how this infrastructure uniquely enables or differentiates NTT in the AI race.
Sawada mentioned the importance of connecting AIs to build a “heterogeneous world” but stopped short of detailing what NTT is building to realize that goal. The result? The keynote felt more like a symposium on the philosophy of technology than a declaration of leadership. For NTT, whose customer base includes some of the world’s most demanding enterprise and government clients, that was a missed opportunity to inspire confidence.

Research Focus Felt Like Strategic Retreat

Kazu Gomi, CEO of NTT Research, doubled down on the company’s commitment to basic research. He described NTT’s Silicon Valley-based research labs and their work in applied physics, cryptography, and cardiovascular bio digital twins. To his credit, he was refreshingly candid in admitting that much of this research may not become a commercial product in the near term. This level of scientific humility is understandable from a university lab. From a Fortune Global 100 enterprise, however, it comes across as strategic deferral.

Gomi introduced a new concept called “Physics of AI,” which is meant to demystify how AI systems work and increase their trustworthiness. However, his description leaned heavily on metaphors, such as apples falling from trees and Newtonian physics, and lacked tangible proof points or technical framing. There was no discussion of frameworks, metrics, or deliverables, no sense of how this new initiative would generate a competitive advantage for NTT or its customers, and no call to action for industry partners; it was just an abstract pitch.

In a world where competitors like Microsoft, Nvidia, Amazon, and Google are integrating AI into everything from data centers to developer tools, NTT’s insistence on staying in the research phase feels like a reluctance to lead.

Snowflake and NTT Data Filled the Vacuum

Ironically, the most substantive content of the event came from outside NTT’s core corporate leadership.
Snowflake CEO Sridhar Ramaswamy delivered a practical, compelling roadmap for enterprise AI adoption. He described how Snowflake helps clients extract value from their internal data, build AI copilots, and deploy production-grade chatbots. He emphasized trust, efficiency, and ease of deployment—core values that enterprise buyers care deeply about.

NTT Data CEO Abhijit Dubey provided a similarly focused perspective. He highlighted four innovation areas: quantum computing, voice mining, attribute-based encryption, and trusted data spaces. Each was tied to real-world customer stories: BMW, Japan Post Bank, and smart city pilots. These were clear, testable use cases tied to value delivery.

But therein lies the disconnect. These leaders made the case for innovation. NTT itself did not. While Snowflake and NTT Data sounded like organizations building and delivering the future, NTT Corporation remained in the background — host to a conversation it should have been driving.

Failure To Connect the Dots

NTT has everything it needs to be a global technology leader in the AI era:

A top-tier worldwide fiber and wireless network
Massive data center presence, especially in high-growth markets like India
Deep talent in quantum science, cybersecurity, and encryption
An enterprise customer base that spans industries and continents
An IT services division (NTT Data) that is among the largest in the world

But Upgrade 2025 failed to tie these assets into a coherent strategic narrative. There was no aggressive positioning of IOWN as the backbone for AI-native networking. There was no clear articulation of how NTT’s data center footprint enables sovereign AI or edge intelligence. No insight was provided into how NTT’s research labs feed product innovation pipelines. And perhaps most surprisingly, there was no bold statement about how NTT intends to compete in an industry moving at unprecedented speed. NTT didn’t use its stage to declare leadership.
It used it to float ideas and host guests. For a company of its size, that’s a passive stance and a missed opportunity to drive the conversation forward.

The Stakes Are Higher Than Ever

Enterprise buyers are not looking for abstract promises. They want trusted partners who can deliver secure, scalable AI infrastructure, integrate with their systems, and help them navigate complexity. Government clients want solutions they can control and deploy locally. Telecom and cloud operators are battling to redefine the role of connectivity in an AI-native world.

NTT could lead all of these markets. It has the assets. It has the talent. And, most crucially, it has the credibility. But to compete, it must first lead with confidence and clarity. Upgrade 2025 was a chance to assert that leadership and showcase NTT’s technology, purpose, business model, and vision for where the AI-powered enterprise is going. Instead, the company chose the safety of abstraction.

Final Thoughts: A Stage Missed

NTT doesn’t need consumer recognition. It doesn’t sell smartphones or search engines. Its impact is felt behind the scenes — where networks are built, data is stored, and enterprise systems are connected. That makes forums like Upgrade 2025 especially important: they are the rare public moments when NTT can show the market what it stands for.

Unfortunately, the company didn’t do itself any favors at this year’s event. It did not strengthen its position as a thought leader. It did not clarify how its formidable portfolio of technologies, across infrastructure, research, and services, comes together to solve enterprise and societal challenges. And it certainly did not project urgency. In a market where AI adoption is now a race — not a roadmap — NTT’s hesitance to lead was palpable. The company may have world-class technology and ideas, but at Upgrade 2025, it left leadership on the table.