• WWW.TECHNOLOGYREVIEW.COM
    The Download: how AI is changing music, and a US city’s AI experiment
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

AI is coming for music, too

While large language models that generate text have exploded in the last three years, a different type of AI, based on what are called diffusion models, is having an unprecedented impact on creative domains. By transforming random noise into coherent patterns, diffusion models can generate new images, videos, or speech, guided by text prompts or other input data. The best ones can create outputs indistinguishable from the work of people, as well as bizarre, surreal results that feel distinctly nonhuman.

Now these models are marching into a creative field that is arguably more vulnerable to disruption than any other: music. Music models can now create songs capable of eliciting real emotional responses, presenting a stark example of how difficult it's becoming to define authorship and originality in the age of AI. Read the full story.

—James O'Donnell

This story is from the next edition of our print magazine, which is all about how technology is changing creativity. Subscribe now to read it and get a copy of the magazine when it lands!

A small US city is experimenting with AI to find out what residents want

Bowling Green, Kentucky, is home to 75,000 residents who recently wrapped up an experiment in using AI for democracy: Can an online polling platform, powered by machine learning, capture what residents want to see happen in their city?

After a month of advertising, the Pol.is portal launched in February. Residents could go to the website and anonymously submit an idea (in less than 140 characters) for what a 25-year plan for their city should include. They could also vote on whether they agreed or disagreed with other ideas. But some researchers question whether soliciting input in this manner is a reliable way to understand what a community wants.
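The "random noise into coherent patterns" idea can be sketched in a few lines. The toy script below is a sketch only, not any real system's code: it runs a DDIM-style reverse process over an eight-value "pattern," with an oracle noise predictor standing in for the trained neural network that real diffusion models learn. `TARGET`, `STEPS`, and the noise schedule are all illustrative assumptions.

```python
import math, random

# Toy sketch of diffusion sampling: start from pure noise and repeatedly
# denoise toward a "coherent pattern". A real model uses a trained network
# to predict the noise; here an oracle that knows the target stands in.

TARGET = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0]  # the pattern to recover
STEPS = 50

def alpha_bar(t):
    # Cosine-style noise schedule: ~1 at t=0 (clean), ~0 at t=1 (pure noise).
    return max(math.cos(0.5 * math.pi * t) ** 2, 1e-5)

def oracle_eps(x, t):
    # What a trained denoiser would estimate: the noise component of x at level t.
    a = alpha_bar(t)
    return [(xi - math.sqrt(a) * ti) / math.sqrt(1 - a)
            for xi, ti in zip(x, TARGET)]

def sample():
    x = [random.gauss(0, 1) for _ in TARGET]  # begin with random noise
    for i in range(STEPS, 0, -1):
        t, t_prev = i / STEPS, (i - 1) / STEPS
        eps = oracle_eps(x, t)
        a, a_prev = alpha_bar(t), alpha_bar(t_prev)
        # DDIM update: re-estimate the clean signal, then step down one noise level.
        x0_hat = [(xi - math.sqrt(1 - a) * e) / math.sqrt(a)
                  for xi, e in zip(x, eps)]
        x = [math.sqrt(a_prev) * x0 + math.sqrt(1 - a_prev) * e
             for x0, e in zip(x0_hat, eps)]
    return x

print([round(v, 3) for v in sample()])  # converges to TARGET
```

Because the oracle here is perfect, the loop recovers the target exactly; in practice the interesting behavior (and the creative unpredictability) comes from the learned, imperfect denoiser and the text-prompt conditioning it is given.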
Read the full story.

—James O'Donnell

How Colossal Biosciences is attempting to own the "woolly mammoth"

What's new: Colossal Biosciences not only wants to bring back the woolly mammoth—it wants to own it, too. MIT Technology Review has learned the Texas startup is seeking a patent that would give it exclusive legal rights to create and sell gene-edited elephants containing ancient mammoth DNA.

But why? Ben Lamm, the CEO of Colossal, says that holding patents on the mammoth and other creatures would "give us control over how these technologies are implemented, particularly for managing initial releases where oversight is critical." Patents, which usually last 20 years, could provide "a clear legal framework during the critical transition period when de-extinct species are first reintroduced," he says. Read the full story.

—Antonio Regalado

If you're interested in what else Colossal's been up to, check out:

+ Game of clones: Colossal's new wolves are cute, but are they dire? The company recently claimed it has revived an extinct species, but scientists are skeptical. Read the full story.
+ As a first step towards resurrecting woolly mammoths, Colossal created these adorable gene-edited 'woolly mice.'

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 OpenAI might be building its own social network
It's a move that's likely to enrage Elon Musk even further. (The Verge)
+ Musk and Sam Altman are still locked in a legal dispute. (CNBC)
+ There are plenty of reasons why OpenAI might want to build a social feed. (NY Mag $)

2 Buying directly from Chinese factories is not a good idea
Despite what TikTok tells you. (WP $)
+ The popularity of apps allowing shoppers to buy from factories is skyrocketing. (WSJ $)

3 Mark Zuckerberg tried to settle Meta's antitrust case last month
Unfortunately for him, the head of the FTC was unmoved by the offer.
(WSJ $)
+ The CEO considered spinning off Instagram in 2018, apparently. (Reuters)
+ The first two days of the trial have focused on 2010-2014. (Bloomberg $)

4 A whistleblower has shed light on how DOGE may have taken private data
Labor law experts are certain the information is completely unrelated to making the government more efficient. (NPR)
+ Federal workers are wading through the chaos. (The Atlantic $)
+ A lot of DOGE's fraud claims are old news. (The Guardian)
+ DOGE's tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

5 Nvidia is bracing itself to lose $5.5 billion
As a result of the Trump administration's new chip sales restrictions. (FT $)
+ Its new H20 chip now requires a special license. (The Guardian)
+ The company's shares plunged in response to the news. (CNN)

6 We're getting closer to a cure for seasonal allergies
An injection usually administered to treat asthma could hold the key. (Vox)

7 Maybe LLMs don't need language after all
Allowing them to process queries in mathematical spaces could improve their output. (Quanta Magazine)
+ Why does AI being good at math matter? (MIT Technology Review)

8 YouTube was given an exemption from Australia's social media ban for under-16s
Even though it's the most popular platform for children by far. (Bloomberg $)

9 Social media can still fight hate without censorship
Although X is probably too far gone, admittedly. (The Atlantic $)
+ How to fix the internet. (MIT Technology Review)

10 How to survive on Mars
Thanks to water-rich asteroids. (Wired $)
+ The quest to figure out farming on Mars. (MIT Technology Review)

Quote of the day

"How else can OpenAI acquire new training data at scale going forward?"

—Bill Gross, the founder of tech incubator Idealab, believes OpenAI has a very clear motive for wanting to build its own social network, Insider reports.
The big story

How refrigeration ruined fresh food

Three-quarters of everything in the average American diet passes through the cold chain—the network of warehouses, shipping containers, trucks, display cases, and domestic fridges that keep meat, milk, and more chilled on the journey from farm to fork.

As consumers, we put a lot of faith in terms like "fresh" and "natural," but artificial refrigeration has created a blind spot. We've gotten so good at preserving (and storing) food that we know more about how to lengthen an apple's life span than a human's, and most of us don't give that extraordinary process much thought at all. But all that convenience has come at the expense of diversity and deliciousness. Read the full story.

—Allison Arieff

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ This list of the 30 best fiction books of the last 30 years does not disappoint.
+ Travel ghost stories? Truly chilling.
+ It's time to caulk the wagon—the seminal Oregon Trail is celebrating its 50th anniversary.
+ These photos of fifties fashion are simply the best.
  • WWW.BUSINESSINSIDER.COM
    OpenAI just gave itself wiggle room on safety if rivals release 'high-risk' models
Sam Altman has defended the company's shifting approach to AI safety. Didem Mente/Getty Images | April 16, 2025, 11:54 UTC

• OpenAI says it could adjust safety standards if a rival launches a risky model without safeguards.
• The company launched GPT-4.1 this week without a safety report.
• Some former employees say OpenAI is scaling back safety promises to stay competitive.

OpenAI doesn't want its AI safeguards to hold it back if its rivals don't play by the same rules.

In a Tuesday blog post, OpenAI said it might change its safety requirements if "another frontier AI developer releases a high-risk system without comparable safeguards." The company said it would only do so after confirming the risk landscape had changed, publicly acknowledging the decision, and ensuring the change wouldn't meaningfully increase the chance of severe harm.

OpenAI shared the change in an update to its "Preparedness Framework," the company's process for preparing for AI that could introduce "new risks of severe harm." Its safety focus includes areas such as cybersecurity, chemical threats, and AI's ability to self-improve.

The shift comes as OpenAI has come under fire for taking different approaches to safety in recent months. On Monday, it launched the new GPT-4.1 family of models without a model or system card — the safety document that typically accompanies new releases from the company.
An OpenAI spokesperson told TechCrunch the model wasn't "frontier," so a report wasn't required. In February, OpenAI launched its Deep Research tool weeks before publishing its system card detailing safety evaluations. These instances have added to ongoing scrutiny of OpenAI's commitment to safety and transparency in its AI model releases.

"OpenAI is quietly reducing its safety commitments," Steven Adler, a former OpenAI safety researcher, posted on X Wednesday in response to the updated framework. Adler said that OpenAI's previous framework, published in December 2023, included a clear requirement to safety-test fine-tuned models. He said the latest update only requires testing if the model is being released with open weights, which is when a model's parameters are made public. "I'd like OpenAI to be clearer about having backed off this previous commitment," he added.

OpenAI did not immediately respond to a Business Insider request for comment.

Ex-OpenAI staff back Musk's lawsuit

Adler isn't the only former employee speaking out about safety concerns at OpenAI. Last week, 12 former OpenAI employees filed a motion asking a judge to let them weigh in on Elon Musk's lawsuit against the company. In a proposed amicus brief filed on Friday, they said that OpenAI's planned conversion to a for-profit entity could incentivize the company to cut corners on safety and concentrate power among shareholders. The group includes former OpenAI staff who worked on safety, research, and policy.

Altman defends OpenAI's approach

Sam Altman, OpenAI's CEO, defended the company's evolving safety approach in a Friday interview at TED2025. He said OpenAI's framework outlines how it evaluates "danger moments" before releasing a model. Altman also addressed the idea that OpenAI is moving too fast. He said that AI companies regularly pause or delay model releases over safety concerns, but acknowledged that OpenAI recently relaxed some restrictions on model behavior.
"We've given users much more freedom on what we would traditionally think about as speech harms," he said. He explained that the change reflects a "more permissive stance" shaped by user feedback. "People really don't want models to censor them in ways that they don't think make sense," he said.
  • WWW.VOX.COM
    The real argument artists should be making against AI
Every artist I know is furious. The illustrators, the novelists, the poets — all furious. These are people who have painstakingly poured their deepest yearnings onto the page, only to see AI companies pirate their work without consent or compensation.

The latest surge of anger is a response to OpenAI integrating new image-generation capabilities into ChatGPT and showing how they can be used to imitate the animation style of Studio Ghibli. That triggered an online flood of Ghiblified images, with countless users (including OpenAI CEO Sam Altman) getting the AI to remake their selfies in the style of Spirited Away or My Neighbor Totoro. Couple that with the recent revelation that Meta has been pirating millions of published books to train its AI, and you can see how we got a flashpoint in the culture war between artists and AI companies.

This story was first featured in the Future Perfect newsletter. Sign up here to explore the big, complicated problems the world faces and the most efficient ways to solve them. Sent twice a week.

When artists try to express their outrage at companies, they say things like, "They should at least ask my permission or offer to pay me!" Sometimes they go a level deeper: "This is eroding the essence of human creativity!" These are legitimate points, but they're also easy targets for the supporters of omnivorous AI. These defenders typically make two arguments.

First, using online copyrighted materials to train AI is fair use — meaning, it's legal to copy them for that purpose without artists' permission. (OpenAI makes this claim about its AI training in general and notes that it allows users to copy a studio's house style — Studio Ghibli being one example — but not an individual living artist's. Lawyers say the company is operating in a legal gray area.)
Second, defenders argue that even if it's not fair use, intellectual property rights shouldn't be allowed to stand in the way of innovation that will greatly benefit humanity.

The strongest argument artists can make, then, is that the unfettered advance of AI technologies that experts can neither understand nor control won't greatly benefit humanity on balance — it'll harm us. And for that reason, forcing artists to be complicit in the creation of those technologies is inflicting something terrible on them: moral injury.

Moral injury is what happens when you feel you've been forced to violate your own values. Psychiatrists coined the term in the 1990s after observing Vietnam-era veterans who'd had to carry out orders — like dropping bombs and killing civilians — that completely contradicted the urgings of their conscience. Moral injury can also apply to doctors who have to ration care, teachers who have to implement punitive behavior-management programs, and anyone else who's been forced to act contrary to their principles. In recent years, a swell of research has shown that people who've experienced moral injury often carry a sense of shame that can lead to severe anxiety and depression.

Maybe you're thinking that this psychological condition sounds a world away from AI-generated art — that having your images or words turned into fodder for AI couldn't possibly trigger moral injury. I would argue, though, that this is exactly what's happening for many artists who are seeing their work sucked up to enable a project they fundamentally oppose, even if they don't yet know the term to describe it. Framing their objection in terms of moral injury would be more effective. Unlike other arguments, it challenges the AI boosters' core narrative that everyone should support AI innovation because it's essential to progress.
Why AI art is more than just fair use or remixing

By now, you've probably heard people argue that trying to rein in AI development means you're anti-progress, like the Luddites who fought against power looms at the dawn of the industrial revolution or the people who said photographers should be barred from taking your likeness in public without your consent when the camera was first invented. Some folks point out that as recently as the 1990s, many people saw remixing music or sharing files on Napster as progressive and actually considered it illiberal to insist on intellectual property rights. In their view, music should be a public good — so why not art and books?

To unpack this, let's start with the Luddites, so often invoked in discussions about AI these days. Despite the popular narrative we've been fed, the Luddites were not anti-progress or even anti-technology. What they opposed was the way factory owners used the new machines: not as tools that could make it easier for skilled workers to do their jobs, but as a means to fire and replace them with low-skilled, low-paid child laborers who'd produce cheap, low-quality cloth. The owners were using the tech to immiserate the working class while growing their own profit margins. That is what the Luddites opposed. And they were right to oppose it, because it matters whether tech is used to make all classes of people better off or to empower an already-powerful minority at others' expense.

Narrowly tailored AI — tools built for specific purposes, such as enabling scientists to discover new drugs — stands to be a huge net benefit to humanity as a whole, and we should cheer it on. But we have no compelling reason to believe the same is true of the race to build AGI — artificial general intelligence, a hypothetical system that can match or exceed human problem-solving abilities across many domains.
In fact, those racing to build it, like Altman, will be the first to tell you that it might break the world's economic system or even lead to human extinction. They cannot argue in good faith, then, that intellectual property should be swept aside because the race to AGI will be a huge net benefit to humanity. They might hope it will benefit us, but they themselves say it could easily doom us instead.

But what about the argument that shoveling the whole internet into AI is fair use? That ignores the fact that when you take something from someone else, it really matters exactly what you do with it. Under the fair use principle, the purpose and character of the use is key. Is it for commercial use? Or not-for-profit? Will it harm the original owner?

Think about the people who sought to limit photographers' rights in the 1800s, arguing that they can't just take your photo without permission. Now, it's true that the courts ruled that I can take a photo with you in it even if you didn't explicitly consent. But that doesn't mean the courts allowed any and all uses of your likeness. I cannot, for example, legally take that photo of you and non-consensually turn it into pornography.

Pornography — not music remixing or file sharing — is the right analogy here. Because AI art isn't just about taking something from artists; it's about transforming it into something many of them detest, since they believe it contributes to the "enshittification" of the world, even if it won't literally end the world.

That brings us back to the idea of moral injury. Currently, as artists grasp for language in which to lodge their grievance, they are naturally using the language that is familiar to them: creativity and originality, intellectual property and copyright law. But that language gestures toward something deeper. The reason we value creativity and originality in the first place is because we believe they're an essential part of human agency.
And there is a growing sense that AI is eroding that agency, whether by homogenizing our tastes, addicting us to AI companions, or tricking us into surrendering our capacity for ethical decision-making. Forcing artists to be complicit in that project — a project they find morally detestable because it strikes at the core of who we are as human beings — is to inflict moral injury on them.

That argument can't be easily dismissed with claims of "fair use" or "benefitting humanity." And it's the argument that artists should make loud and clear.
  • WWW.DAILYSTAR.CO.UK
    GTA 6 fan plans 'ultimate' Vice City themed launch party for game's release date
GTA 6 still doesn't have a release date, but that hasn't stopped one fan from preparing the "ultimate GTA VI launch party" for when it does arrive – here's what guests can look forward to.

Tech | 12:18, 16 Apr 2025

How much longer will we have to wait? (Image: Rockstar Games/AFP via Getty Images)

Grand Theft Auto 6 is still hopefully coming this year, and while some are concerned it's been delayed, Take-Two's CEO suggested that the company knows what it's doing when it comes to marketing the biggest game in history. While one leaker yesterday suggested we can expect to be playing in November, a release window of "Fall 2025" is enough for one super fan, who has started putting together the ultimate GTA 6-themed party for launch.

From DIY decorations to activities and even food suggestions, they've planned out everything – we just wish Rockstar would give us all a date so they can start sending out RSVPs.

Posting on Reddit, superfan alexcd421 said: "Hey fellow criminals of Los Santos (and soon Vice City)! With GTA VI finally arriving in Fall 2025, I'm planning an epic launch night party with close friends and family. I want to go all out with a Miami/GTA-themed celebration on a budget, and thought I'd share my plans in case anyone else wants to throw their own GTA VI launch party.
Plus, I'd love to hear your additional ideas!"

The poster's list of suggestions ties into a "Miami/Florida tropical" vibe, like Hawaiian shirts for guests, Cuban sandwiches for food, and stuffed animals like dolphins and flamingos. Then there's the criminal side of things, taken care of with fun printouts of heist plans, a mugshot photobooth, fake ankle monitors, and stacks of (fake) cash.

GTA 6 is likely to be the biggest entertainment release in history (Image: Rockstar Games/AFP via Getty Images)

Better yet, alexcd421 has even provided print-outs for the in-universe cans of fizzy drink, so you can cover up the existing labels and enjoy an ice-cold can of Sprunk or eCola.

Fans have been responding with their own ideas, and some are equally fantastic. "Get some configurable light strips, set it from ceiling to the floor, make it go from orange to purple or something like the artwork," one suggested, while another suggested doing the same but for police lights. "My guy doing some real work," one commenter cheered, while another said: "Amazing list! Your party is gonna be amazing."

How will you celebrate when GTA 6 finally launches? I'll probably be ordering food, cracking open an energy drink, and enjoying not having to write about the release date for a bit...

For the latest breaking news and stories from across the globe from the Daily Star, sign up for our newsletters.
  • METRO.CO.UK
    Nintendo Switch 2 videos show exactly how strong the magnetic Joy-Cons are
Adam Starkey | Published April 16, 2025 1:04pm | Updated April 16, 2025 1:04pm

Is the force strong with this one? (Nintendo)

The strength of the Switch 2 Joy-Cons has been tested, and it is possible to pull them off the console with enough force.

As a hybrid device designed for children, the Nintendo Switch is more vulnerable to wear and tear than your average console. The design of the original Switch wasn't entirely successful as a child-proof piece of kit, with the console subject to Joy-Con drift and occasional mishaps with the slide-on rails. The Switch 2 is hoping to address the latter through its new magnetic Joy-Cons, and with many people getting to go hands-on with the console at the Switch 2 Experience events around the world, several videos have shown off exactly how strong they are.

One video shows how the Joy-Cons remain attached even when the console is dangled by a single controller, so it won't easily slide off if you decide to sling it across the room. While you're supposed to detach the controllers by first pressing buttons on the rear of the Joy-Con, a video on MinnMax shows you can pull them off through pure force if you want. It appears to take some pulling, though, so it's not just going to fall off at random.

The big question is whether these magnetic connections will lose strength over time, which might make playing in handheld mode slightly unreliable if they can be easily pulled off in the heat of a boss battle.

As revealed earlier this month in a Q&A, the magnetic Joy-Cons were first proposed for the original Switch, but the late Nintendo president Satoru Iwata rejected the idea. Speaking about the origins of the Switch 2's Joy-Cons, the console's producer Kouichi Kawamoto said: 'Originally when we were developing Switch, there was the idea to attach the Joy-Con controllers to the console with magnets.
Using magnets, you'd be able to attach and detach the Joy-Con controllers right away, making it easier to share them.'

He added: 'I took the prototypes to Iwata-san, the company's president at the time, for feedback. But unfortunately, the Joy-Con controllers would wobble when attached to the console using magnets due to the weak connection.

'We decided to scrap the idea as we were concerned it would make customers uneasy about using the console. Instead, we adopted the rail system for Switch, which allowed for a more stable attachment. But we always wished we could make it easier to attach and detach the controllers.'

The decision to implement them for the Switch 2 came after 'a lot of trial and error' with the Technology Development Department, with Kawamoto adding: 'And we were finally able to attach it firmly and remove it easily with just a light press of the release button.'

The Switch 2 is set to launch on June 5, 2025, alongside Mario Kart World. UK pre-orders are already live, but pre-orders in the US and Canada have been delayed due to the recently announced tariffs.

It isn't an easy pull (Nintendo)
  • GIZMODO.COM
    Trump Admin Has a New Target: People Who Aggressively Believe in Nothing
Over the past decade, there's been a lot of talk about "ideological extremism" on both the left and right, and the government has often claimed that warped political beliefs are encouraging Americans to commit violent acts. However, under the new Trump administration, the government now seems prepared to go after people who don't believe in anything at all.

Independent journalist Ken Klippenstein writes that the government has a new target in its war on extremism: nihilists—more specifically, Nihilist Violent Extremists, or NVEs. The government has reportedly come up with this designation as a kind of catchall for the culprits behind various violent incidents, and the term has shown up in several recent court cases.

Who is a true NVE? That's a good question, and the answer is: anybody. Klippenstein aptly notes that the term has a conveniently loose definition that could be applied to all sorts of different groups that the government considers undesirable. He writes that the NVE term…

…has the beauty of being elastic enough to apply to individuals and groups who are the focus of the administration's war on all kinds of Americans. Nihilism also avoids all of the rusty and problematic words of the past: subversive, dissident, insurrectionist, revolutionary, or even "anti-government" (the Biden term).

Klippenstein writes that the term was recently used in the legal proceedings of Nikita Casap, a teen from Wisconsin who was arrested in February and charged with murdering his parents. Law enforcement claims Casap also planned to assassinate President Trump to spur a civil war in the U.S. But, hypothetically, the NVE term could also potentially be applied to people like Luigi Mangione, the young man accused of assassinating a UnitedHealthcare CEO, or the gaggle of people who have recently been arrested for vandalizing and firebombing Teslas, or the Zizians.

The road to this new low in law enforcement terminology has been long.
While “ideological extremism” has always existed in the U.S., it became a political (and, eventually, policy) issue in the modern era during the Clinton years, when incidents like Ruby Ridge and the Oklahoma City bombing brought fears of the rightwing militia movement into the mainstream. During the Bush years, 9/11 spurred a war on Islamist extremism—both in the U.S. and all over the world. Then, during the Biden years, the specter of January 6th encouraged the government to declare a war on “domestic terrorism.” In short, the government has always found reasons to justify its federal police powers, though few of them have ever been as sloppily constructed as the current government’s newest fearmongering buzzword.
  • WWW.ARCHDAILY.COM
    Beyond the Walls: 21 Contemporary Interventions in Castles and Fortresses
Castles and fortresses often rise from strategic, commanding positions, whether standing alone or integrated into urban and rural landscapes. From above, they overlook the city, bearing in their imposing structures the weight of history. With their original functions now limited to contemplation, these spaces have been undergoing revaluation and reintegration into everyday urban life. Once symbols of military or political power, they are now taking on new roles through contemporary interventions that engage with their heritage without erasing their past.
  • WWW.TECHNEWSWORLD.COM
    AI Raises Bigger Concerns for Students Than Teachers, Admins: Study
As AI continues reshaping the classroom, a new multinational study reveals that students are more worried about its impact than teachers or administrators — raising fresh concerns about critical thinking, misuse, and the future of learning.
  • WWW.POPSCI.COM
    One Montessori-inspired app is making screen time better for kids
Screen time is kind of tough to avoid now. Your kid sees you on your phone, and they want their own little screen to play with, but that doesn't mean you have to drop the internet in their laps. If you want to turn screen time into a safe, fun learning opportunity, check out Pok Pok. This calming Montessori-inspired app helps kids aged 2-8 calm down, learn, and grow, and it's on sale. A lifetime subscription is only $49.99 (reg. $250).

How does Pok Pok work?

Pok Pok is built around the idea that screens can actually support healthy development if they're used the right way. Instead of loud ads, fast-paced levels, or flashy animations, Pok Pok offers a quiet, gentle environment where kids can explore at their own pace. There's no winning or losing, just creative, open-ended play that encourages experimentation and curiosity. Every activity is designed with child development in mind, offering foundational experiences that promote cognitive, emotional, and social growth. Whether your child wants to dress up a character, build shapes, solve puzzles, or explore space, there's a "digital toy" inside Pok Pok that makes learning feel like play.

It's not just the kids who benefit, either. Parents get peace of mind knowing the app is ad-free, COPPA-compliant, and designed in collaboration with early childhood educators. Pok Pok also keeps things fresh by regularly updating with new seasonal content, cultural moments, and creative surprises.

So why get a lifetime subscription when your kid's going to outgrow the app in a few years? You can install Pok Pok on up to 10 devices. That means if your child gets a younger sibling, they can both play with Pok Pok. It also means that if a tablet gets dipped into a bowl of cereal, you can just install the app on a new (maybe hopefully waterproof) device.

Use code SAVE10 by April 27 at 11:59 p.m. PT to get a Pok Pok lifetime subscription on sale for $49.99.

StackSocial prices subject to change.
Pok Pok: Lifetime Subscription – $49.99

What makes this deal special

Pok Pok is an educational alternative to standard apps for kids. It's calming, entertaining, and helps kids learn during some really important years. It also lasts for life and can be installed on multiple devices, for siblings or in case a device breaks.
  • WWW.SCIENCENEWS.ORG
    A messed-up body clock could be a bigger problem than lack of sleep
On the eve of Daylight Saving Time, I flew home to Vermont from California. Crossing several time zones, I arrived near midnight. At 2 a.m., the clock jumped ahead an hour, leaving me discombobulated. "How messed up am I?" I asked sleep researcher and evolutionary anthropologist David Samson days later. Jet lag can make people feel moody and hungry at weird times, but my extreme state probably masked chronic sleep dysregulation, he told me.

For most of human history, people woke with the sun and slept with the stars. Environmental cues like light and temperature synchronized the body's clock, or circadian rhythm, to the day-night cycle. Nowadays, many of us spend more time indoors than out, where we bathe in artificial light and temperatures set for optimal comfort.