• ARCHINECT.COM
    Five ways construction contracts can be re-designed for tariff uncertainty
    As Archinect has reported in depth over recent months, the new Trump Administration’s unpredictable wielding of tariffs on goods entering the United States has direct consequences for construction projects. For architects acting as contract administrators between clients and contractors, our article from February emphasized the importance of reviewing language that addresses taxes and tariffs to determine which party is financially responsible for material price increases. To that end, just last week, we carried commentary from the Associated Builders and Contractors, who noted that tariffs were driving rapid increases in construction material prices. Related on Archinect: Who foots the bill if tariffs raise a project’s construction costs? (Image credit: Ricardo Gomez Angel/Unsplash) Late last month, meanwhile, Archinect spoke with Phillip Ross, a partner at New York accounting firm Anchin and leader of the company’s architecture, engineering, and construction division, for insights o...
  • Star Wars Zero Company Announced by Respawn Entertainment and Bit Reactor
    After images and the official name leaked weeks ago, Respawn Entertainment and Bit Reactor have officially announced Star Wars Zero Company. The single-player turn-based tactics title lacks a release date or platforms, but a reveal is scheduled for Star Wars Celebration Japan 2025 on April 19th. A developer panel hosted by Bit Reactor will provide the world’s first look at the title, which could include gameplay. In the meantime, the first key art is available and highlights what could be the player’s squad. There’s a Jedi, a Clone Trooper, and a Mandalorian, while a man, potentially their commander, sits on a ledge, peering at a hologram of a Republic droid trooper. Perhaps it’s set between Episode II: Attack of the Clones and Episode III: Revenge of the Sith, when the Republic army and Trade Federation waged war on multiple fronts. The leaked screenshots indicated XCOM-like gameplay, from cover-based shooting to hit chances and skills like Overwatch. It won’t be long before we receive more details, so stay tuned for official updates.
  • WWW.SMITHSONIANMAG.COM
    Crows May Grasp Basic Geometry: Study Finds the Brainy Birds Can Tell the Difference Between Shapes
    Scientists tested crows on their ability to recognize “geometric regularity,” a skill previously assumed to be unique to humans. Crows are arguably among the smartest creatures on the planet, possessing some cognitive abilities that rival those of 5- to 7-year-old human children. Now, a new study adds basic geometry to the list of subjects these brainy birds seem able to master. In a paper published in the journal Science Advances last week, researchers report that carrion crows (Corvus corone) can recognize “geometric regularity,” meaning they may discern traits like length of sides, parallel lines, right angles and symmetry. In the study, they could tell the difference between shapes like stars, crescents and squares, as well as between squares and irregular figures with four sides. Researchers once thought this ability was unique to humans. But the findings suggest that’s not true—and they hint at the possibility that other species may be capable of similar feats, too. “The crows show a sort of intuitive, strictly perceptual recognition of geometric properties,” says Giorgio Vallortigara, a neuroscientist at the University of Trento in Italy who was not involved with the work, to Scientific American’s Gayoung Lee. To test the birds’ mathematical abilities, scientists in Germany placed two male carrion crows in front of a digital screen in a laboratory. They displayed six shapes on the screen, then trained the birds to peck at the outlier—the one that looked different from all the others. Whenever the birds chose correctly, researchers rewarded them with a tasty snack, either a mealworm or a bird seed pellet. At first, the researchers made the outliers obvious—such as one flower amid five crescents, reports NPR’s Nell Greenfieldboyce.
But as the birds got more comfortable with the task at hand, the team made the experiment increasingly challenging. They showed the crows similar-looking squares, parallelograms and other irregular four-sided figures. Even as the game got more difficult, the crows could still pick out the outlier. They continued correctly pecking at it, even after the scientists stopped giving them treats. (Image: Researchers rewarded the birds with a tasty treat, like a mealworm, when they correctly pecked at the outlier shape on a digital screen. Schmidbauer et al. / Science Advances, 2025) Why would crows need to be able to tell shapes apart? Researchers don’t know for sure. But they suspect this ability may help them with navigation and orientation as they fly around, they write in the paper. The birds may also have developed this ability to help them forage for food or identify other individual crows—including mates—based on their facial features. “All these capabilities, at the end of the day, from a biological point of view, have evolved because they provide a survival advantage or a reproductive advantage,” study senior author Andreas Nieder, a neurophysiologist at the University of Tübingen in Germany, tells Scientific American. In the future, researchers hope to investigate which areas of the birds’ brains are helping them excel at geometry. Birds don’t have a cerebral cortex—at least, not in the same way that humans do. In humans, that part of the brain is responsible for thinking and other complex functions. Crows still have these abilities, so the researchers posit there must be something else going on inside their heads. “Obviously, evolution found two different ways of giving rise to behaviorally flexible animals,” Nieder says to Scientific American. The team also hopes future research will probe the “geometric regularity” abilities of other species. In the past, researchers have run similar experiments with baboons.
But even after extensive training, the primates didn’t seem to share our mathematical understanding. Still, scientists say it’s unlikely that humans and crows are the only animals with this ability. “It’s just now opening this field of investigation,” Nieder tells NPR. Crows are the whiz kids of the animal kingdom. Past research has found that they can vocally count up to four, distinguish between human voices and faces, and grasp a pattern-forming concept thought to be unique to humans. Some species can build tools for future use, while others are likely aware of their own body size. These and other examples of animals’ intelligence are upending the long-held notion that humans are the only species capable of high-level cognitive functioning. “Humans do not have a monopoly on skills such as numerical thinking, abstraction, tool manufacture and planning ahead,” Heather Williams, a biologist at Williams College, told CNN’s Scottie Andrew last year. “No one should be surprised that crows are ‘smart.’”
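The oddball task the researchers describe (six shapes, peck the one that breaks the pattern) can be mimicked in a few lines. This is a toy construction of mine, not the study's actual stimuli or scoring: it rates a quadrilateral's "regularity" by how uniform its side lengths are, then flags the shape whose score deviates most from the group.

```python
import math

def side_lengths(pts):
    """Lengths of a polygon's sides, given vertices in order."""
    return [math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts))]

def regularity(pts):
    """1 / (1 + variance of side lengths): higher means more regular."""
    s = side_lengths(pts)
    mean = sum(s) / len(s)
    var = sum((x - mean) ** 2 for x in s) / len(s)
    return 1.0 / (1.0 + var)

def pick_outlier(shapes):
    """Index of the shape whose regularity score deviates most from the mean."""
    scores = [regularity(p) for p in shapes]
    mean = sum(scores) / len(scores)
    return max(range(len(scores)), key=lambda i: abs(scores[i] - mean))
```

With five unit squares and one lopsided quadrilateral, `pick_outlier` flags the lopsided one, which is the easy version of the discrimination the crows managed even on subtle cases.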
  • VENTUREBEAT.COM
    OpenAI slashes prices for GPT-4.1, igniting AI price war among tech giants
    OpenAI released GPT-4.1 this morning, directly challenging competitors Anthropic, Google and xAI. By ramping up its coding and context-handling capabilities to a whopping one-million-token window and aggressively cutting API prices, GPT-4.1 is positioning itself as the go-to generative AI model. If you’re managing budgets or crafting code at scale, this pricing shake-up might just make your quarter.

Performance upgrades at Costco prices

The new GPT-4.1 series boasts serious upgrades, including a 54.6% win rate on the SWE-bench coding benchmark, a considerable leap from prior versions. But the buzz isn’t just about better benchmarks. Real-world tests by Qodo.ai on actual GitHub pull requests showed GPT-4.1 beating Anthropic’s Claude 3.7 Sonnet in 54.9% of cases, primarily thanks to fewer false positives and more precise, relevant code suggestions. OpenAI’s new pricing structure—openly targeting affordability—might finally tip the scales for teams wary of runaway AI expenses:

Model          Input cost (per Mtok)   Output cost (per Mtok)
GPT-4.1        $2.00                   $8.00
GPT-4.1 mini   $0.40                   $1.60
GPT-4.1 nano   $0.10                   $0.40

The standout here? That generous 75% caching discount, effectively incentivizing developers to optimize prompt reuse—particularly beneficial for iterative coding and conversational agents.

Feeling the heat

Anthropic’s Claude models have established their footing by balancing power and cost. But GPT-4.1’s bold pricing undercuts their market position significantly:

Model               Input cost (per Mtok)   Output cost (per Mtok)
Claude 3.7 Sonnet   $3.00                   $15.00
Claude 3.5 Haiku    $0.80                   $4.00
Claude 3 Opus       $15.00                  $75.00

Anthropic still offers compelling caching discounts (up to 90% in some scenarios), but GPT-4.1’s base pricing advantage and developer-centric caching improvements position OpenAI as a budget-friendlier choice—particularly appealing for startups and smaller teams.
Gemini’s pricing complexity is becoming increasingly notorious in developer circles. According to Prompt Shield, Gemini’s tiered structure—especially with the powerful 2.5 Pro variant—can quickly escalate into financial nightmares due to surcharges for lengthy inputs and outputs that double past certain context thresholds:

Model                    Input cost (per Mtok)   Output cost (per Mtok)
Gemini 2.5 Pro (≤200k)   $1.25                   $10.00
Gemini 2.5 Pro (>200k)   $2.50                   $15.00
Gemini 2.0 Flash         $0.10                   $0.40

Moreover, Gemini lacks an automatic billing shutdown, which Prompt Shield says exposes developers to Denial-of-Wallet attacks—malicious requests designed to deliberately inflate a cloud bill, which Gemini’s current safeguards don’t fully mitigate. GPT-4.1’s predictable, no-surprise pricing seems to be a strategic counter to Gemini’s complexity and hidden risks.

Context is king

xAI’s Grok series, championed by Elon Musk, unveiled API pricing for its latest models last week:

Model              Input cost (per Mtok)   Output cost (per Mtok)
Grok-3             $3.00                   $15.00
Grok-3 Fast-Beta   $5.00                   $25.00
Grok-3 Mini-Fast   $0.60                   $4.00

One complicating factor with Grok has been its context window. Musk touted that Grok 3 could handle 1 million tokens (similar to GPT-4.1’s claim), but the current API actually maxes out at 131k tokens, well short of that promise. This discrepancy drew criticism from users on X, who pointed to a bit of overzealous marketing on xAI’s part. For developers evaluating Grok vs. GPT-4.1, this is notable: GPT-4.1 offers the full 1M context as advertised, whereas Grok’s API might not (at least at launch). In terms of pricing transparency, xAI’s model is straightforward on paper, but the limitations and the need to pay more for “fast” service show the trade-offs of a smaller player trying to compete with industry giants.
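To make the per-Mtok rates quoted above concrete, here is a small cost calculator. This is my own sketch: the rates come from this article, while the workload figures and the simple caching model are illustrative assumptions, not vendor billing logic.

```python
# Per-Mtok API rates as quoted in the article; treat these as a snapshot,
# since vendors change pricing frequently.
PRICES = {
    "gpt-4.1": (2.00, 8.00),              # (input $/Mtok, output $/Mtok)
    "gpt-4.1-mini": (0.40, 1.60),
    "gpt-4.1-nano": (0.10, 0.40),
    "claude-3.7-sonnet": (3.00, 15.00),
    "grok-3": (3.00, 15.00),
}

def workload_cost(model, input_mtok, output_mtok,
                  cached_fraction=0.0, cache_discount=0.0):
    """Estimated spend: cached input tokens are billed at (1 - cache_discount)."""
    in_rate, out_rate = PRICES[model]
    cached = input_mtok * cached_fraction
    fresh = input_mtok - cached
    return (fresh * in_rate
            + cached * in_rate * (1 - cache_discount)
            + output_mtok * out_rate)

# Hypothetical workload: 100 Mtok in (half of it cache hits), 20 Mtok out.
gpt_cost = workload_cost("gpt-4.1", 100, 20,
                         cached_fraction=0.5, cache_discount=0.75)
claude_cost = workload_cost("claude-3.7-sonnet", 100, 20)
```

Under these made-up workload numbers, the GPT-4.1 run comes to $285 versus $600 for Claude 3.7 Sonnet without caching, which is the kind of gap the article argues will pressure competitors.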
Windsurf bets big on GPT-4.1’s developer appeal

Demonstrating high confidence in GPT-4.1’s practical advantages, Windsurf—the AI-powered IDE—has offered an unprecedented free, unlimited GPT-4.1 trial for a week. This isn’t mere generosity; it’s a strategic gamble that once developers experience GPT-4.1’s capabilities and cost savings firsthand, reverting to pricier or less capable models will be a tough sell.

A new era of competitive AI pricing

OpenAI’s GPT-4.1 isn’t just shaking up the pricing game; it’s potentially setting new standards for the AI development community. With precise, reliable outputs verified by external benchmarks, simple pricing transparency, and built-in protections against runaway costs, GPT-4.1 makes a persuasive case for being the default choice in closed-model APIs. Developers should brace themselves—not just for cheaper AI, but for the domino effect this pricing revolution might trigger as Anthropic, Google, and xAI scramble to keep pace. For teams previously limited by cost, complexity, or both, GPT-4.1 might just be the catalyst for a new wave of AI-powered innovation.
  • WWW.THEVERGE.COM
    Apple’s complicated plan to improve its AI while protecting privacy
    Apple says it’s found a way to make its AI models better without training on its users’ data or even copying it from their iPhones and Macs. In a blog post first reported by Bloomberg, the company outlined its plans to have devices compare a synthetic dataset to samples of recent emails or messages from users who have opted into its Device Analytics program. Apple devices will be able to determine which synthetic inputs are closest to real samples, which they will relay to the company by sending “only a signal indicating which of the variants is closest to the sampled data.” That way, according to Apple, it doesn’t access user data, and the data never leaves the device. Apple will then use the most frequently picked fake samples to improve its AI text outputs, such as email summaries. Currently, Apple trains its AI models on synthetic data only, potentially resulting in less helpful responses, according to Bloomberg’s Mark Gurman. Apple has struggled with the launch of its flagship Apple Intelligence features, as it pushed back the launch of some capabilities and replaced the head of its Siri team. But now, Apple is trying to turn things around by introducing its new AI training system in a beta version of iOS and iPadOS 18.5 and macOS 15.5, according to Gurman. Apple has been talking up its use of a method called differential privacy to keep user data private since at least 2016, with the launch of iOS 10, and has already used it to improve the AI-powered Genmoji feature. This applies to the company’s new AI training plans as well, as Apple says that introducing randomized information into a broader dataset will help prevent it from linking data to any one person.
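The mechanism Apple describes (pick the nearest synthetic variant on device, then report only that choice with added randomness) can be sketched as follows. This is my illustrative reconstruction under stated assumptions, using embedding vectors and classic randomized response, not Apple's actual implementation:

```python
import math
import random

def closest_variant(sample_vec, synthetic_vecs):
    """Index of the synthetic embedding nearest the on-device sample
    (squared Euclidean distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(synthetic_vecs)),
               key=lambda i: sq_dist(sample_vec, synthetic_vecs[i]))

def randomized_report(true_index, n_variants, epsilon=2.0, rng=random):
    """Classic randomized response: report the true index with probability
    p = e^eps / (e^eps + n - 1), otherwise a uniformly random index, so no
    single report reveals which variant was actually closest."""
    p = math.exp(epsilon) / (math.exp(epsilon) + n_variants - 1)
    if rng.random() < p:
        return true_index
    return rng.randrange(n_variants)
```

The server would then keep only the frequency histogram of reported indices, enough to see which synthetic samples resemble real messages in aggregate without seeing any individual message.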
  • WWW.MARKTECHPOST.COM
    Multimodal Models Don’t Need Late Fusion: Apple Researchers Show Early-Fusion Architectures are more Scalable, Efficient, and Modality-Agnostic
    Multimodal artificial intelligence faces fundamental challenges in effectively integrating and processing diverse data types simultaneously. Current methodologies predominantly rely on late-fusion strategies, where separately pre-trained unimodal models are grafted together, such as attaching vision encoders to language models. This approach, while convenient, raises critical questions about optimality for true multimodal understanding. The inherent biases from unimodal pre-training potentially limit the model’s ability to capture essential cross-modality dependencies. Also, scaling these composite systems introduces significant complexity, as each component brings its hyperparameters, pre-training requirements, and distinct scaling properties. The allocation of computational resources across modalities becomes increasingly difficult with this rigid architectural paradigm, hampering efficient scaling and potentially limiting performance in tasks requiring deep multimodal reasoning and representation learning. Researchers have explored various approaches to multimodal integration, with late-fusion strategies dominating current implementations. These methods connect pre-trained vision encoders with language models, establishing a well-understood paradigm with established best practices. Early-fusion models, which combine modalities at earlier processing stages, remain comparatively unexplored despite their potential advantages. Native multimodal models trained from scratch on all modalities simultaneously represent another approach. However, some rely on pre-trained image tokenizers to convert visual data into discrete tokens compatible with text vocabularies. Mixture of Experts (MoE) architectures have been extensively studied for language models to enable efficient parameter scaling, but their application to multimodal systems remains limited. 
While scaling laws have been well-established for unimodal models, predicting performance improvements based on compute resources, few studies have investigated these relationships in truly multimodal systems, particularly those using early-fusion architectures processing raw inputs. Researchers from Sorbonne University and Apple investigate scaling properties of native multimodal models trained from scratch on multimodal data, challenging conventional wisdom about architectural choices. By comparing early-fusion models, which process raw multimodal inputs directly against traditional late-fusion approaches, researchers demonstrate that late fusion offers no inherent advantage when both architectures are trained from scratch. Contrary to current practices, early-fusion models prove more efficient and easier to scale, following scaling laws similar to language models with slight variations in scaling coefficients across modalities and datasets. Analysis reveals optimal performance occurs when model parameters and training tokens are scaled in roughly equal proportions, with findings generalizing across diverse multimodal training mixtures. Recognizing the heterogeneous nature of multimodal data, the research extends to MoE architectures, enabling dynamic parameter specialization across modalities in a symmetric and parallel manner. This approach yields significant performance improvements and faster convergence compared to standard architectures, with scaling laws indicating training tokens should be prioritized over active parameters, a pattern distinct from dense models due to the higher total parameter count in sparse models. The architectural investigation reveals several key findings about multimodal model scaling and design. Native early-fusion and late-fusion architectures perform comparably when trained from scratch, with early-fusion models showing slight advantages at lower compute budgets. 
Scaling laws analysis confirms that compute-optimal models for both architectures perform similarly as compute budgets increase. Importantly, native multimodal models (NMMs) demonstrate scaling properties resembling text-only language models, with scaling exponents varying slightly depending on target data types and training mixtures. Compute-optimal late-fusion models require a higher parameters-to-data ratio compared to their early-fusion counterparts, indicating different resource allocation patterns. Sparse architectures using Mixture of Experts significantly benefit early-fusion NMMs, showing substantial improvements over dense models at equivalent inference costs while implicitly learning modality-specific weights. In addition to this, the compute-optimal sparse models increasingly prioritize scaling training tokens over active parameters as compute budgets grow. Notably, modality-agnostic routing in sparse mixtures consistently outperforms modality-aware routing approaches, challenging intuitions about explicit modality specialization. The study presents comprehensive scaling experiments with NMMs across various architectural configurations. Researchers trained models ranging from 0.3 billion to 4 billion active parameters, maintaining consistent depth while scaling width to systematically evaluate performance patterns. The training methodology follows a structured approach with variable warm-up periods—1,000 steps for smaller token budgets and 5,000 steps for larger budgets—followed by constant learning rate training and a cooling-down phase using an inverse square root scheduler comprising 20% of the constant learning rate duration. To robustly estimate scaling coefficients in their predictive equations, researchers employed the L-BFGS optimization algorithm paired with Huber loss (using δ = 10^-3), conducting thorough grid searches across initialization ranges.  
Comparative analysis reveals significant performance advantages of sparse architectures over dense models for multimodal processing. When compared at equivalent inference costs, MoE models consistently outperform their dense counterparts, with this advantage becoming particularly pronounced for smaller model sizes, suggesting enhanced capability to handle heterogeneous data through modality specialization. As model scale increases, this performance gap gradually narrows. Scaling laws analysis demonstrates that sparse early-fusion models follow similar power law relationships to dense models with comparable scaling exponents (-0.047 vs. -0.049), but with a smaller multiplicative constant (26.287 vs. 29.574), indicating lower overall loss. This research demonstrates that native multimodal models follow scaling patterns similar to language models, challenging conventional architectural assumptions. Early-fusion and late-fusion approaches perform comparably when trained from scratch, with early-fusion showing advantages at lower compute budgets while being more efficient to train. Sparse architectures using Mixture of Experts naturally develop modality-specific specialization, significantly improving performance without increasing inference costs. These findings suggest that unified, early-fusion architectures with dynamic parameter allocation represent a promising direction for efficient multimodal AI systems that can effectively process heterogeneous data.
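The power-law fits quoted above can be turned into a quick numerical comparison. A minimal sketch, using the multiplicative constants and exponents reported for sparse early-fusion versus dense models; the compute budget plugged in is an arbitrary illustration, not a value from the paper:

```python
def predicted_loss(compute, coeff, exponent):
    """Scaling-law prediction L(C) = coeff * C**exponent."""
    return coeff * compute ** exponent

# Fitted constants reported in the article: (multiplicative constant, exponent).
SPARSE_EARLY_FUSION = (26.287, -0.047)
DENSE = (29.574, -0.049)

# At an illustrative compute budget, the sparse fit predicts lower loss.
C = 1e20
sparse_loss = predicted_loss(C, *SPARSE_EARLY_FUSION)
dense_loss = predicted_loss(C, *DENSE)
```

Because the exponents are nearly identical, the smaller multiplicative constant dominates across realistic budgets, matching the article's claim that the sparse fit implies lower overall loss.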
  • TOWARDSAI.NET
    Having Streamlit Superpowers: The Best GPT-4 Prompts For Guaranteed Data Visuals
    April 14, 2025, by John Loewen, PhD. Originally published on Towards AI. No-fuss prompting for error-free Python Streamlit data viz code. How easy is it to create data visuals with GPT-4 and the Python Streamlit library? Way easier than I ever expected! A little bit of fancy footwork with a good dataset and we can rattle off chart after chart (even maps), giving us in-depth analysis and stories for our data set. What kind of charts and maps, you ask? We can prompt GPT-4 for: time-series analysis, comparative analysis, distribution analysis, and an interactive global choropleth map. Let me take you through the simple steps to accomplish this, with a minimal amount of fuss. Personally, I like to do all of my chart generation using publicly accessible global development data. For this example, let’s use the UNESCO global democracy index dataset that can be downloaded HERE. Once we have a CSV file from the UNESCO site in our hands, we can upload this file to GPT-4 and prompt away: Prompting GPT-4o to give us a description of the fields in the Global Democracy Index dataset. Response from GPT-4: This dataset contains 6 fields across 32,518 rows. Here’s a concise overview of each field: Entity — Name of the country or region. Code — ISO 3-letter country code (some entries are missing). Year — The year of the observation. Electoral democracy index — A numerical score representing the level of electoral… Read the full blog for free on Medium.
  • WWW.IGN.COM
    The New Dell Tower Plus Gaming PC with GeForce RTX 4070 Ti Super GPU Drops to $1,650
    Starting this week, Dell is offering a Dell Tower Plus gaming PC equipped with a GeForce RTX 4070 Ti Super graphics card for only $1,649.99 with free shipping. This PC can comfortably run games at up to 4K resolution, and it's considerably less expensive than an RTX 5070 Ti prebuilt, which would cost you well over $2,000 no matter where you buy it.

New Dell Tower Plus Intel Core Ultra 7 265 RTX 4070 Ti Super Gaming PC: $1,649.99 at Dell

The Dell Tower Plus is equipped with an Intel Core Ultra 7 265 CPU, GeForce RTX 4070 Ti Super GPU, 16GB of DDR5-5200MHz RAM, and a 1TB M.2 SSD. The Intel Core Ultra 7 265 processor has a max turbo frequency of 5.3GHz with 20 cores and a 36MB cache. You can choose the more powerful Ultra 7 265K model for an additional $100, which automatically upgrades your cooling from "Standard" to "Advanced" air cooling with a more robust tower heatsink fan. Skip the Core i9 CPU upgrade, since gaming performance is usually GPU-bound, especially at higher resolutions. The entire system is powered by a 750W 80PLUS Platinum power supply. The GeForce RTX 4070 Ti Super is a great card for gaming at any resolution, from 1080p all the way to 4K. At 1080p and 1440p you'll be able to achieve 144fps or beyond in most games, so it pairs best with FHD or QHD monitors with high refresh rates. 4K is a much more demanding resolution, but you should still be able to run most games at a consistent 60fps. Compared to the new Blackwell cards, the RTX 4070 Ti Super is significantly more powerful than the RTX 5070 and only about 10%-15% less powerful than the RTX 5070 Ti.
The RTX 4070 Ti Super also has the same amount of VRAM as the RTX 5070 Ti and 5080, although it uses older-generation GDDR6 instead of GDDR7.

This costs hundreds less than an RTX 5070 Ti gaming PC

Although the new RTX 5070 Ti GPU might be a bit faster, a prebuilt RTX 5070 Ti gaming PC will run you hundreds more than this deal. Right now, the least expensive gaming PC equipped with an RTX 5070 Ti GPU on Amazon runs for over $2,000, which means you're going to have to pay an extra $500+ for 10% improved performance.

CyberPowerPC Gamer Xtreme VR Intel Core i9-14900F RTX 5070 Ti Gaming PC (32GB/2TB): $2,299.99 at Amazon
CyberPowerPC Gamer Supreme AMD Ryzen 9 9900X RTX 5070 Ti Gaming PC (32GB/2TB): $2,329.99 at Amazon
CyberPowerPC Gamer Supreme AMD Ryzen 7 9800X3D RTX 5070 Ti Gaming PC (32GB/2TB): $2,429.99 at Amazon
CyberPowerPC Gamer Xtreme VR Intel Core i9-14900KF RTX 5070 Ti Gaming PC (32GB/2TB): $2,429.99 at Amazon
CyberPowerPC Gamer Xtreme VR Intel Core Ultra 9 285 RTX 5070 Ti Gaming PC (32GB/2TB): $2,479.99 at Amazon
CyberPowerPC Gamer Supreme AMD Ryzen 9 9900X3D RTX 5070 Ti Gaming PC (32GB/2TB): $2,619.99 at Amazon
CyberPowerPC Gamer Supreme AMD Ryzen 9 9950X3D RTX 5070 Ti Gaming PC (64GB/4TB): $2,999.99 at Amazon
Skytech Rampage AMD Ryzen 7 9700X RTX 5070 Ti Gaming PC (32GB/1TB): $2,399.99 at Amazon
Skytech O11 Vision AMD Ryzen 7 7800X3D RTX 5070 Ti Gaming PC (32GB/1TB): $2,399.99 at Amazon

Why Should You Trust IGN's Deals Team? IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worth paying. Our ultimate goal is to surface the best possible deals from brands we trust and that our editorial team has personal experience with.
You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter.Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.
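The value argument above ($500+ extra for roughly 10% more performance) reduces to a simple ratio. A back-of-the-envelope helper, using the article's prices and its rough 10% performance estimate rather than measured benchmarks:

```python
def premium_per_percent(base_price, alt_price, perf_gain_pct):
    """Extra dollars paid per percentage point of performance gained."""
    return (alt_price - base_price) / perf_gain_pct

# Dell 4070 Ti Super deal vs. the cheapest 5070 Ti prebuilt cited above.
premium = premium_per_percent(1649.99, 2299.99, 10)
```

That works out to about $65 per percentage point of extra performance, which is the article's case for the 4070 Ti Super deal in a single number.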
  • WWW.HOUSEBEAUTIFUL.COM
    5 Trends From Milan Design Week Designers Say You MUST Know
    Every April, the international design world descends on Milan for Salone del Mobile—and 2025 has proven once again why it's the event of the year for anyone who lives and breathes interiors. Officially running from April 8–13, Milan Design Week (as it’s commonly called) transforms the city into a mecca of creativity, with design debuts, immersive installations, and parties that go well into the night. But Milan Design Week is not just about appreciating new innovations and beautiful things—it presents a forecast of where the industry is headed. As one of the world’s leading furniture and interiors fairs, Salone is where trends are born. Each year brings breakthroughs in design, technology, and more, and we tapped some of the sharpest design minds in the business to find out what’s new and next. “This year’s Salone del Mobile in Milan made it clear: design is embracing boldness, personality, and thoughtfulness,” says Cintia Dixon, president of ASID New York Metro and CEO of Tlina Design. From palettes inspired by the Pantone Color of the Year and retro-futurism to mixed media and expressive, sculptural forms, Milan Design Week 2025 was a celebration of design at its most personal and imaginative. Interior designer Travis London, of Studio London Co., put it best: “The energy is electric, and the creativity is next-level.” So, without further ado, here are the top five trends from the 2025 edition of Milan Design Week that designers want you to know now.

Mocha Mousse Is the New Neutral

(Image courtesy of Missoni: Missoni takes over Principe Bar at Hotel Principe di Savoia in honor of the opening of the first boutique exclusively dedicated to the Missoni Home collection.)

Call it the "latte" effect—this year, everything at Salone seemed to be dipped in shades of Pantone’s Color of the Year: Mocha Mousse. From lighting to tableware, and across both contemporary and classic styles, warm mocha tones dominated the pavilions in Rho Fiera.
“It had an incredibly welcoming effect,” noted Arlene Angard of Arlene Angard Interior Designs and Fine Art. Whether rendered in velvet, lacquer, or ceramics, these hues brought softness and sophistication to every corner. These earthy tones were often paired with sustainable materials, showing how design continues to deepen its connection to nature. “Mother Nature seemed to be the underlying inspiration,” Angard shared. Think recycled woods, tactile fabrics, and natural finishes with a modern twist. Dixon echoed this statement, noting, “Nature-inspired elements such as cork, bamboo, and pine are once again taking center stage, offering both warmth and eco-conscious appeal.”

Emphasis on Organic Shapes

(Image by Paola Pansini: The Bocci apartment, featuring the new 141 lighting series.)

Designer Maria Lomanto of DesignGLXY is seeing nature’s influence on design taken one step further. From undulating wood furniture to glass that seemed to shimmer mid-melt, organic shapes were anything but static. Lomanto described the look as “Faux Nature+”—a hyper-natural movement that mimics, exaggerates, and even animates the forms we see in everything from furniture and lighting to accessories. “I saw this across all materialities—glass, wood, metal—whether from young brands using 3D printing or a 730-year-old Murano glass company,” she says. In other words, nature is not only back—it’s alive, and it’s “melting, dripping, waving in a breeze” through design in mesmerizing ways.

Embracing History Through Retrofuturism

(Image by Alejandro Ramirez Orozco: Retrofuturistic 1970s “Silver Lining” exhibition by Nilufar in collaboration with Fosbury Architecture.)

One of the most distinctive trends at Salone 2025 was a kind of love letter to the past, reimagined for the future.
Margo Fezza of Studio Fezza described it as “future vintage,” with pieces drawing inspiration from the late-19th to mid-20th century, with a particular pull from Art Deco, Postmodernism, and even retro-futuristic Space Age design. “Some of my favorite recurring elements were floral Murano glass chandeliers, intricate lattice motifs, high-gloss burl wood, and anything in stainless steel—it always manages to feel super chic,” she shared. This revival was seen not only at the main fair of Salone but also throughout the galleries and curated exhibitions across Milan for its namesake design week. Fratelli Boffi, Soft Witness, Lemon Furniture, Unicoggetto, Jorge Suárez-Kilzi, and Zieta are some of the talented manufacturers propelling the trend forward. The aesthetic isn’t just nostalgic—it’s a clever fusion of past and future that feels fresh, collectible, and very now.

The Rise of Fashion-Home Crossovers

Image credit: Francois Halard. The Row installation at the Palazzo Belgioioso in the Quadrilatero della Moda district, including furniture pieces by Maison Baguès and Julian Schnabel.

Luxury fashion houses are continuing to make waves in the interiors world. Veterans of the fashion-to-home pipeline, such as Ralph Lauren Home and Hermès, introduced new lines as always, but they were in new company with two fellow fashion brands now also turning to the home space. Louis Vuitton debuted its first-ever home line, while The Row made its own quiet-but-chic debut, comprising understated soft goods crafted from the world’s finest cashmere. High fashion’s pivot to home is reshaping what luxury looks like. “It’s no longer just about what you wear; it’s about how you live,” says London.

Mixing Materials in Surprising Ways

Image credit: Lorenzo Bacci. Moroso exhibit at Via Pontaccio 8/10, featuring the Clay chair with fire-glazed ceramic details by Zanellato/Bortotto.

The days of matchy-matchy are over. Salone 2025 celebrated bold material juxtapositions.
“Designers are pushing boundaries and adding depth to spaces through fresh, tactile pairings,” London says. He noted sightings of unexpected textures layered together in truly creative ways, like etched marble on statement walls or ceramics on the backs of chairs, as seen in the Moroso exhibit pictured above. “Handcrafted accents brought individuality and soul to every room,” says Cintia Dixon. “The vibe is a blend of whimsy and sophistication—playful pieces meet refined details, all brimming with character.”
  • 9TO5MAC.COM
    SongCapsule Quiz adds new rules and more artist playlists
    As we previously reviewed here on 9to5Mac, SongCapsule Quiz is a game inspired by the iPod’s Music Quiz in which the player has to guess the name of the song, album, or artist playing. With its latest update, SongCapsule Quiz now has new rules aimed at making the game more fun, as well as more artist playlists and a few other changes.

What’s new in SongCapsule Quiz

One of the biggest changes coming with version 1.2 of the game is that players now need to complete all the levels in a playlist before they can replay the previous ones. According to the developers, this should encourage players to keep progressing to earn more points, rather than replaying easier levels. Once you’ve unlocked all the levels, you can replay any you want. In addition, points are now only valid for each specific level and are not carried over to the next. And to ensure fairer competition, only the most difficult levels will count for points in Game Center. “Now, you can be sure that everyone is playing at the highest difficulty available when competing on the leaderboards.”

There are also four new artist playlists to choose from: Britney Spears, Lana Del Rey, Sabrina Carpenter, and The Weeknd. SongCapsule Quiz now shows all playlists curated by Sorcererhat Playlists by default, even if you don’t have them in your music library, but you can hide the playlists you don’t want in the settings.

Another important change relates to the app’s requirements. Whereas the game previously required an Apple Music subscription, it now works for anyone. And if you want to subscribe to Apple Music, you can do so right in SongCapsule Quiz.

You can download SongCapsule Quiz on the App Store. Although the game is free, it has a paid premium tier that unlocks more features, such as challenging your friends in Apple’s Game Center.