• Morphosis updates progress on UT Dallas performing arts design
    archinect.com
    Morphosis has updated the construction progress on the second phase of its work on the O'Donnell Athenaeum project for the University of Texas at Dallas. Phase II of the new Richardson campus academic/cultural precinct entails the Performance Hall and Music Building. In September, we covered the completion of Phase I's Crow Museum of Asian Art. Phase III of the project entails adding a new car park structure and a yet-unnamed third museum building.
  • Wikipedia picture of the day for February 8
    en.wikipedia.org
    The Lost World is a 1925 American silent fantasy giant monster adventure film, directed by Harry O. Hoyt and written by Marion Fairfax, adapted from Arthur Conan Doyle's 1912 novel of the same name. The film's premiere was at the Astor Theatre in New York City on February 8, 1925.
  • On this day: February 8
    en.wikipedia.org
    February 8: Feast day of Saint Josephine Bakhita (Catholicism); Military Foundation Day in North Korea (1948)
    Pictured: South Carolina Highway Patrolmen before the Orangeburg Massacre
    421 – Honorius declared Constantius III to be his co-emperor of the Western Roman Empire.
    1250 – Seventh Crusade: The Ayyubid Sultanate of Egypt defeated and captured King Louis IX of France at the Battle of Fariskur.
    1575 – William of Orange founded Leiden University, the oldest university in the Netherlands.
    1960 – The official groundbreaking of the Walk of Fame took place in Hollywood, Los Angeles, California.
    1968 – Law enforcement officers in Orangeburg, South Carolina (pictured), fired into a crowd of college students who were protesting segregation, killing three and injuring twenty-seven others.
    Births and deaths: Jack Lemmon (b. 1925), Valerie Thomas (b. 1943), A. Chandranehru (d. 2005), Mary Wilson (d. 2021)
  • Apple's ELEGNT framework could make home robots feel less like machines and more like companions
    venturebeat.com
    Apple introduces ELEGNT, allowing robots to convey intentions, emotions, and attitudes through movement, rather than just functional tasks.
  • OpenAI responds to DeepSeek competition with detailed reasoning traces for o3-mini
    venturebeat.com
    By showing a more detailed version of the chain of thought of o3-mini, OpenAI is closing the gap with DeepSeek-R1.
  • GM will reportedly stop making gas-powered Chevy Blazer
    www.theverge.com
    The combustion-engine version of the Chevy Blazer is reportedly being discontinued, and the model will eventually only be offered as an EV, sources tell GM Authority. Both the Chevy Blazer EV and the gas-powered Chevy Blazer have been assembled at GM's Ramos Arizpe plant in Mexico, but with the gas Blazer now sunsetting with the 2025 model year, the facility is being retooled to accommodate only electric vehicles, GM Authority says. In an email to The Verge, Chevy representative Chad Lyons says that "we have no portfolio changes to share and will not comment on speculation."

Chevy makes EV and ICE versions of the Blazer, a mid-sized SUV, and the Equinox, a smaller SUV. The EV models differed from their gas counterparts not only in powertrain but also in style and platform. Both the Blazer EV and Equinox EV share GM's tailored EV platform (formerly called Ultium) and are built in the same plant in Mexico alongside the Cadillac Optiq. Honda's Prologue, which is essentially a rebadged Blazer EV, is built in the same plant as well.

As GM Authority notes, a discontinued gas-powered Blazer means Chevy no longer carries a two-row combustion SUV in North America. Chevy does still use the nameplate for a three-row SUV in China.

Chevy sold 52,576 Blazers in the US in 2024, a drop from the 94,599 sold in 2020, when the 2019-model-year redesign was introduced. GM also discontinued the Cadillac XT5 and XT6 this year.
  • Google Calendar removed events like Pride and BHM because its holiday list "wasn't sustainable"
    www.theverge.com
    Some Google Calendar users are angrily calling the company out after noticing that certain events, like Pride Month, are no longer highlighted by default. Black History Month, Indigenous Peoples Month, Jewish Heritage, Holocaust Remembrance Day, and Hispanic Heritage have also been removed, according to a Google product expert. One user called the move "shameful" and said that the platform is being used to "capitulate to fascism." Over the last few years, there have been comments and media reports complaining about the presence of the notes, but now they're gone.

Google confirmed it's made changes to the default Calendar events, but with a different explanation about when and why. Here's Google's explanation of what's going on, provided by spokesperson Madison Cushman Veld:

"For over a decade we've worked with timeanddate.com to show public holidays and national observances in Google Calendar. Some years ago, the Calendar team started manually adding a broader set of cultural moments in a wide number of countries around the world. We got feedback that some other events and countries were missing, and maintaining hundreds of moments manually and consistently globally wasn't scalable or sustainable. So in mid-2024 we returned to showing only public holidays and national observances from timeanddate.com globally, while allowing users to manually add other important moments."

Timeanddate.com didn't reply to requests for comment.
  • Optimizing Large Model Inference with Ladder Residual: Enhancing Tensor Parallelism through Communication-Computing Overlap
    www.marktechpost.com
    LLM inference is highly resource-intensive, requiring substantial memory and computational power. To address this, various model parallelism strategies distribute workloads across multiple GPUs, reducing memory constraints and speeding up inference. Tensor parallelism (TP) is a widely used technique that partitions weights and activations across GPUs, enabling them to process a single request collaboratively. Unlike data or pipeline parallelism, which processes independent data batches on separate devices, TP ensures efficient scaling by synchronizing intermediate activations across GPUs. However, this synchronization relies on blocking AllReduce operations, creating a communication bottleneck that can significantly slow down inference, sometimes contributing nearly 38% of the total latency, even with high-speed interconnects like NVLink.

Prior research has attempted to mitigate communication delays by overlapping computation with data transfer. Approaches such as writing fused GPU kernels for matrix operations and using domain-specific languages (DSLs) to optimize distributed workloads have shown promise. However, these techniques often require extensive low-level optimizations, making them difficult to implement in standard ML frameworks like PyTorch and JAX. Additionally, given the rapid evolution of hardware accelerators and interconnects, such optimizations frequently need to be re-engineered for new architectures. Alternative strategies, including sequence parallelism and fine-grained operation decomposition, have been explored to improve TP efficiency, but communication latency remains a fundamental limitation in large-scale distributed inference.

Researchers from institutions including USC, MIT, and Princeton introduced Ladder Residual, a model modification that enhances Tensor Parallelism efficiency by decoupling computation from communication.
Instead of altering low-level kernels, Ladder Residual reroutes residual connections, enabling overlapping and reducing communication bottlenecks. Applied to a 70B-parameter Transformer, it achieves a 30% inference speedup across eight GPUs. Training 1B and 3B Ladder Transformer models from scratch maintains performance parity with standard Transformers. Additionally, adapting Llama-3.1-8B with minimal retraining preserves accuracy. This scalable approach facilitates multi-GPU and cross-node deployment and applies broadly to residual-based architectures.

Utilizing the Ladder Residual architecture, the Ladder Transformer enhances Transformer efficiency by enabling communication-computation overlap. It routes residual connections differently, allowing asynchronous operations that reduce communication bottlenecks. Testing on various model sizes, including Llama-3 70B, shows up to a 29% speedup in inference throughput, with gains reaching 60% under slower communication settings. By incorporating Ladder Residual, the architecture achieves faster token processing and lower latency without sacrificing model accuracy. The approach proves beneficial even in cross-node setups, demonstrating over 30% improvement in large-scale models like Llama 3.1 405B, making it effective for multi-GPU deployments.

The study evaluates Ladder Residual's impact on model performance by training Ladder Transformers (1B and 3B) from scratch and comparing them with standard and parallel Transformers on 100B tokens from FineWeb-edu. Results show that Ladder Transformers perform similarly to standard models at the 1B scale but slightly worse at 3B. The authors also apply Ladder Residual to Llama-3.1-8B-Instruct's upper layers, finding an initial performance drop in generative tasks that is recoverable through fine-tuning. Post-adaptation, inference speed improves by 21% with minimal performance loss.
The findings suggest Ladder Residual can accelerate models without significant degradation, with the potential for further optimization through advanced adaptation techniques.

In conclusion, the study proposes Ladder Residual, an architectural modification that enables efficient communication-computation overlap in model parallelism, improving speed without compromising performance. Applied to Tensor Parallelism, it enhances large model inference by decoupling communication from computation. Testing on Ladder Transformers (1B and 3B models) shows they perform similarly to standard Transformers, achieving over 55% speedup. Applying Ladder Residual to Llama-3.1-8B requires only light retraining for a 21% inference speedup, retaining original performance. This approach reduces the need for expensive interconnects, suggesting the potential for optimizing model architectures and inference systems together. Code for replication is provided.

Check out the Paper. All credit for this research goes to the researchers of this project. Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
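The rerouted residual dataflow can be sketched in a few lines. Below is a hypothetical single-process simulation (the function names, the two-rank sharding, and the toy layers are illustrative, not from the paper's code): in the standard path each layer must wait for the AllReduce of its own output, while in the ladder variant each layer's reduction is only consumed one layer later, so on real hardware it could run concurrently with the next layer's compute.

```python
import numpy as np

def allreduce(partials):
    """Simulated AllReduce: sum the partial outputs from all TP ranks."""
    return sum(partials)

def standard_forward(x, layers, ranks=2):
    """Standard TP residual: x <- x + AllReduce(f(x)) at every layer.
    The blocking reduce must finish before the next layer can start."""
    for f in layers:
        partials = [f(x) / ranks for _ in range(ranks)]  # each rank's shard
        x = x + allreduce(partials)                      # blocking sync point
    return x

def ladder_forward(x, layers, ranks=2):
    """Ladder-style rerouting: each layer computes on a residual stream that
    has not yet absorbed the previous layer's reduction; that reduction is
    added one step late, so it can overlap with the current layer's compute."""
    pending = None                                       # reduce "in flight"
    for f in layers:
        partials = [f(x) / ranks for _ in range(ranks)]  # start compute first
        if pending is not None:
            x = x + allreduce(pending)                   # late-landing reduce
        pending = partials
    return x + allreduce(pending)                        # drain the last one
```

With three toy layers `f(v) = 0.1 * v`, the standard path yields `1.1**3` times the input while the ladder path yields a slightly different value, reflecting that Ladder Residual is a genuine architectural change (hence the paper's from-scratch training and light retraining experiments) rather than a mathematically equivalent rewrite.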
  • Important Computer Vision Papers for the Week from 27/01 to 01/02
    towardsai.net
    Last Updated on February 7, 2025 by Editorial Team. Author(s): Youssef Hosni. Originally published on Towards AI.

Stay updated with recent computer vision research. This member-only story is on us. Upgrade to access all of Medium. Every week, researchers from top research labs, companies, and universities publish exciting breakthroughs in diffusion models, vision language models, image editing and generation, video processing and generation, and image recognition. This article provides a comprehensive overview of the most significant papers published in the fifth week of January 2025, highlighting the latest research and advancements in computer vision. Whether you're a researcher, practitioner, or enthusiast, this article will provide valuable insights into the state-of-the-art techniques and tools in computer vision.

Covered: Diffusion Models · Vision Language Models. Most insights I share in Medium have previously been shared in my weekly newsletter, To Data & Beyond. If you want to be up-to-date with the frenetic world of AI while also feeling inspired to take action or, at the very least, to be well-prepared for the future ahead of us, this is for you. Subscribe below to become an AI leader among your peers and receive content not present on any other platform, including Medium: Data Science, Machine Learning, AI, and what is beyond them. Click to read To Data & Beyond, by Youssef Hosni: ayoussefh.substack.com

"Recent advancements in 3D content generation from text or a single image struggle with limited high-quality …" Read the full blog for free on Medium. Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI.
  • Save $1,000 Off the Lenovo Legion 7 Intel Core i9 RTX 4080 Super Gaming PC
    www.ign.com
    Lenovo has dropped the price of its powerful Lenovo Legion Tower 7i Gen 8 RTX 4080 Super gaming PC to only $2,232.49 with coupon code "EXTRAFIVE". In our recent Legion Tower 7 review (the sample we received wasn't as powerful as this one), Jacqueline Thomas wrote that "The Legion Tower 7i is an incredibly powerful gaming PC, especially for the money you're likely going to be paying for it. If all you want is a powerful, upgradeable machine without having to go through the trouble of building it yourself, it's hard to find many gaming PCs better than this one."

The Lenovo Legion Tower 7i Gen 8 is equipped with an Intel Core i9-14900KF CPU, GeForce RTX 4080 Super GPU, 32GB of DDR5-4000MHz RAM, and a 2TB PCIe NVMe SSD. The unlocked 14th-gen Intel Core i9-14900KF Raptor Lake "Refresh" CPU boasts a max Turbo clock of 6GHz with 24 cores, 32 threads, and a 36MB cache. It's still one of the most powerful Intel CPUs available (in many cases it even beats out the new Intel Core Ultra 9 285K). It's cooled by a robust 360mm all-in-one liquid cooling system that rivals many enthusiast setups.

The RTX 4080 Super is Nvidia's second most powerful RTX 40 series card. You'll be able to handle any game in 4K at high frame rates, even with ray tracing enabled. It's 5-10% faster than the RTX 4080 thanks to its higher base clock speed, higher CUDA core count, and higher memory bandwidth. It trades blows with AMD's most powerful GPU, the Radeon RX 7900 XTX, but the RTX 4080 Super pulls ahead in ray tracing performance and where DLSS 3.0 is supported. It's nearly identical in performance to the new RTX 5080 GPU and also has the same amount of VRAM.

From Jacqueline Thomas's Nvidia GeForce RTX 4080 Super review: "The Nvidia GeForce RTX 4080 Super, just like the original RTX 4080, is a 4K graphics card through and through. In every test, this GPU excels at that high resolution with plenty of room to spare, especially if you utilize Nvidia's DLSS. Even in games without DLSS or ray tracing, like Total War: Warhammer 3, the RTX 4080 Super is a monster at 4K, scoring 79fps, compared to 76 from the RTX 4080 and 86 from the AMD Radeon RX 7900 XTX."

Why choose Lenovo? Lenovo Legion gaming PCs and laptops generally feature better and more rugged build quality than most other prebuilt PCs. For desktop PCs in particular, people like that Lenovo does not use many proprietary components in its rigs, so the PCs are much easier to upgrade with easily obtainable, off-the-shelf parts. For laptops, Lenovo generally does not throttle the GPU on most of its Legion laptops, so you should expect maximum performance from a given GPU. Lenovo generally includes a solid one-year warranty with the option to extend.

Why should you trust IGN's deals team? IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worth paying. Our ultimate goal is to surface the best possible deals from brands we trust and that our editorial team has personal experience with. You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter. Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.