  • A MESSAGE FROM THE INTERNATIONAL CHAPTER CHAIR
    www.architecture.com.au
    Justin Hill, International Chapter Chair
    Thank you all for a wonderful year! I have been reflecting on the past year's activities and the many people who have contributed to the work we have done. I would like to thank the International Chapter Institute staff and the current International Chapter Councillors, all of whom have worked together to deliver some excellent results.
    I would like to thank the current Chapter Councillors for all of the effort and time they have given over the past year; many of them join us online for meetings at various times of the day and night, depending on their time zones. It was wonderful to meet face to face in Hobart, Tasmania this year and collaborate with the Tasmanian Chapter, SONA and the creative directors of the Australasian Student Architecture Congress Ground Matters on Bl?ck Party, a combined celebration of the International and Tasmanian Chapters' awards presentations and the closing party for Ground Matters!
    This year we were joined by Amy Learmonth as the EmAGN (Emerging Architects and Graduates Network) representative on International Chapter Council. We are working to grow our EmAGN membership and engagement with this demographic within the International Chapter, and having Amy in this role provides a vital connection back to the national EmAGN committee and to our emerging architect members. We advocated for, and succeeded in, changing the criteria of the Emerging Architect Prize: the prize will now be awarded annually rather than biennially, and the International Chapter winner will also be eligible to proceed to the national competition.
    In addition, this year we hosted Boarding Pass events in Hong Kong, Singapore and Kuala Lumpur, collaborating with the respective institutes of architecture in each location. These events are always an engaging evening and an opportunity to connect with colleagues and share experiences of working in architecture internationally. I encourage you to attend the next Boarding Pass event in your international region; more on that next year!
    It can be a challenge to connect when living and working internationally, and the International Chapter is delighted to be hosting some festive gatherings online so that all of our members can connect with us and share in some festive trivia. This is a wonderful opportunity to meet the International Chapter Councillors in your region and the leadership team representing you as members. Those who are not yet members are welcome to join and find out more.
    Finally, I would like to express my gratitude to all of our International Chapter members for their contribution to the industry this year, and to wish everyone a safe and relaxing holiday break.
  • A MESSAGE FROM THE TAS CHAPTER PRESIDENT
    www.architecture.com.au
    Daniel Lane, Tasmanian Chapter President
    In preparation for the recently held President's Lunch, which celebrates our Chapter Fellows, Life Fellows and Councillors, I have been reflecting on the past year's activities and the many people who have contributed to the work we have done this year. I would like to thank the Tas Chapter Institute staff: Jen, Fi, Katie, Loren and Nina. It's amazing to see how much work is undertaken by this committed team. It is not until you are in this position (Chapter President) that you realise what is actually being done. An incredible amount of work is undertaken, and it is often unseen; I would like to recognise that and thank them for it.
    I would also like to thank the current Chapter Councillors. This year we have tried to instigate some change. For many years we have unfortunately seen the demise of many chapter committees, but this year we have tried to rectify that. It is not easy, and it won't happen overnight, but I would like to thank our Councillors for jumping in and assisting many of the committees by taking the first steps towards reinvigorating them.
    In addition, this year we have looked at getting back to our roots and focusing on the issues that relate directly to practice. We have all had our issues with planning, with heritage, and with the exorbitant services engineering requirements for projects. I need not go on! So we have provided strong advocacy at both local and state level to identify our concerns and to attempt to overcome some of the situations we are currently facing. Through these meetings and discussions, we believe our voice has been heard. We have been invited to the table, and we firmly believe our concerns are being considered and acted upon. Changes will be made: some have already been seen, and some are on the way, which will hopefully assist with our endeavours.
    Finally, I would like to express my gratitude to all of our Tasmanian Chapter members for their contribution to the industry this year and to wish everyone a safe and relaxing holiday break. I look forward to working with you all again in the new year.
  • How to take stock of the year that was
    blog.medium.com
    As we mentioned way back in February, the future's coming sooner than you think.
    Issue #229: the new rules of media + a cooking tip
    In T-15 days it will be 2025. Years are artificial, human-created markers of time, but they serve a purpose: they give life shape. I try to take some time during "dead week" (the week between Christmas and New Year's) to figure out where I'm going and what I can do better. On Friday, I shared leadership coach Tutti Taygerly's year-end questionnaire, which is basically a list of journaling prompts. A few: Which relationships of yours grew the most this year? Where did you build unexpected trust? Which behaviors have you let go of?
    Inspired by that, I went looking for more succinct ways to take stock of your year. Rochelle Deans suggests looking back through wherever you naturally document your life (texts, sent emails, posts to your social media platform of choice, your photos app) and identifying highlights and lowlights. Or, if you want a mnemonic, business strategist Dev Singh has a dead-simple acronym, the Four Ls:
    - Loved: When did you feel most alive this year? What did you genuinely enjoy?
    - Longed: What did you want this year? What didn't materialize? What disappointed you?
    - Loathed: What did you hate? What felt like a waste of time?
    - Lessons: What did you learn? What do you still have to learn?
    Lastly, in a Medium story from a few years back, executive coach Andrea Mignolo encouraged everyone to write a letter for yourself to read a year from now (i.e., at the end of 2025). There's really no wrong way to write this. You can tell your future self what you're committed to, excited about, or nervous about. Seal it up and don't open it until this time next year.
    Whatever your end-of-year ritual (or none), remember: how you end the year is how the next one begins. Harris Sockel
    3 of my open tabs:
    - The new rules of media: most people are less obsessed with news and newness than journalists think. (One Thing)
    - A response to last Wednesday's issue on ChatGPT's favorite words, which should really be a Medium post of its own: William Bennett, who taught a course on sci-fi at Tufts, articulates ChatGPT's Achilles heel better than I could. GPT fails at subtext because subtext depends on "being aware of other selves, or creating a 'we' situation where both you and I have something unspoken at stake."
    - Bestselling novelist Tao Lin records himself writing a 12,000-word essay about his cat and narrates his editorial decisions along the way (like splitting large, unwieldy paragraphs into smaller ones). It's meditative to watch another writer's keystrokes. In the words of one commenter, "thanks for sharing your writing and editing process, your thoughts, the beautiful classical music and the sound of nature and insects outside and the rain."
    Your daily dose of practical wisdom: to boil eggs whose shells slide off elegantly, please, for the love of breakfast, listen to J. Kenji López-Alt, who's done more tests on egg boiling than anyone. Gently nestle them into about one inch of boiling water and leave them there, lid on, for nine minutes. (This changed my life. I'd been doing it wrong for years.)
  • It's Been More Than 300 Years Since Japan's Breathtaking Mount Fuji Last Erupted
    www.smithsonianmag.com
    From a photogenic distance, Mount Fuji is a nearly perfect, usually snow-capped cone, protruding out of the Japanese island of Honshu and into the clear blue skies. But another view reveals the site of Mount Fuji's last confirmed eruption, which began on December 16, 1707, during Japan's Hoei era.
    The image of a tranquil Fuji became enshrined in Katsushika Hokusai's 19th-century woodblock series Thirty-Six Views of Mount Fuji, which, according to Franz Lidz in Smithsonian, juxtaposed the mountain's calm permanence with the turbulence of nature and the flux of daily life. But viewed from the southeast, "the least admired view" according to anthropologist Frederick Starr, an imperfection hints at the mountain's turbulent past. This "excrescence," as Starr puts it, is the site of the 1707 eruption.
    The Hoei eruption, as it's known, was anything but tranquil. It was likely triggered by an 8.6-magnitude earthquake that struck off the coast of Japan on October 28, one of the most violent seismic events in Japanese history. [Image: a drawing depicting the Hoei eruption. Public domain via Wikimedia Commons]
    The earthquake triggered massive tsunamis that killed thousands and compressed the magma chambers in Mount Fuji, simultaneously building pressure and blocking off release vents. Over the next 49 days, hot magma from deep within the volcano mixed with cooler magma, and stress within the volcano built until, on December 16, the pressure became too intense and the volcano began to erupt.
    The destruction was immense. The eruption spewed tons of tephra (rock fragments ejected from the volcano) onto Yokohama and Tokyo, some 60 miles east of Fuji, blanketing the cities in over an inch and a half of ash. The volcano released nearly 30 billion cubic feet of ash, leaving the atmosphere so densely clouded that residents had to light candles to see even during the daytime. Flows of mud, rock and other debris known as lahars devastated farms and villages near the volcano, and the buildup of ash in rivers and streams caused further flooding.
    While no official death toll was issued for the eruption, which lasted until January 1, many residents suffered respiratory problems related to the ash, especially in densely populated cities. In the countryside, devastated farmland meant low food supplies and starvation. Famine lasted a decade.
    The Hoei event was Mount Fuji's biggest eruption in the Holocene epoch, the past 11,700 or so years of Earth's history. On the volcanic explosivity index, the eruption scored a five out of eight ("very large," based on the amount of debris displaced), comparable to the Mount St. Helens eruption of 1980.
    Although Fuji hasn't had any confirmed eruptions since Hoei, it might not be quiet forever: it is still considered an active volcano. Japanese authorities have published predictions of where craters are likely to form on Mount Fuji, along with evacuation areas and guidelines for the tens of millions of residents of Tokyo's metropolitan area, should it erupt again.
    Meanwhile, the mark of the Hoei eruption is still evident on Fuji's face, a reminder of its fiery past. As he hiked up the volcano a century ago, Starr described it as "a regular crater-cone, bare of vegetation, composed of fresh-looking cinders," the result of the last great eruption. Even that once-spewing cavity is tranquil now, a silent witness to one of history's most violent volcanic explosions.
  • Capcom exploring further IP revivals following Okami, Onimusha reveals
    www.gamesindustry.biz
    Publisher tells investors it will focus on "reactivating dormant IPs" in addition to new launches. Image credit: Capcom. News by James Batchelor, Editor-in-Chief. Published on Dec. 16, 2024.
    Capcom is planning to revive more of its dormant franchises following the reveal of an Okami sequel and the return of Onimusha last week.
    In a press release for Onimusha: Way of the Sword on the company's investor relations site, the publisher hinted that more of its classic IP will return in the coming years, although it stopped short of naming specific properties.
    "In addition to regularly releasing major new titles each year, Capcom is focusing on re-activating dormant IPs that haven't had a new title launch recently," the publisher wrote. "The company is working to further enhance corporate value by leveraging its rich library of content, which includes reviving past IPs like the two titles announced above, in order to continuously produce highly efficient, high-quality titles."
    Capcom announced both Onimusha: Way of the Sword and an untitled Okami sequel during The Game Awards last week. While both series have received remasters in recent years, there has not been a new Okami game since 2010's Nintendo DS spin-off Okamiden. And while Onimusha received a browser game, Onimusha Soul, in 2012 and a virtual reality title, Shadow Team, earlier this year, there has not been a new fully fledged Onimusha since 2006's Dawn of Dreams.
    The announcement of the Okami sequel was paired with another notable return: the game's creator, Hideki Kamiya, is back in the fold at the head of a brand-new studio, Clovers.
  • Apple's foldable iPad could be like two iPad Pros side-by-side
    www.theverge.com
    Apple hopes to release a foldable 18.8-inch "creaseless" iPad by about 2028, Bloomberg's Mark Gurman writes in today's Power On newsletter. The company's industrial design group has reportedly managed to create prototypes of this device that have a nearly invisible crease and would essentially be like two iPad Pros side-by-side.
    Rumors of a folding iPad have been floating in the ether for years now. Recent ones include a smaller model that Apple would release in 2026 or 2027. Gurman's write-up today has strong echoes of the gargantuan 20-inch folding iPad / MacBook hybrid he detailed in 2022. That doesn't seem to mean that it will run macOS, but Gurman claims that it will have elements of both Macs and iPads and that iPadOS should be advanced enough to run macOS apps by 2028.
    Considering that Macs run iPhone and iPad apps now, it's not outrageous to think the street could go both ways in time. It might help the value proposition, too; the 13-inch iPad Pro starts at $1,299, and whatever financial damage an iPad twice that size could incur would be a little easier to take coupled with the salve of being able to run macOS apps on it.
    Gurman says a foldable iPhone is still in the works, though he doesn't expect that before 2026 at the earliest, as other rumors have said. He also says information from his sources lines up with an alleged Apple internal display roadmap that made the rounds recently, tipping the 18.8-inch foldable iPad and Apple's plans to release OLED MacBook Pros in 2026, followed by a MacBook Air OLED update in 2027.
  • 2025 in tech: who's in and who's out
    www.theverge.com
    Hello! I'm here from the future. And I have some news. 12 months from now, all the Big Tech CEOs are still in their jobs, everybody's using folding phones, Apple made a TV, and Nvidia is the most valuable company in the history of the universe. Wild year, huh? Or maybe not? It's hard to remember. Time travel messes with your memory a little.
    On this episode of The Vergecast, the second installment of our two-part 2025 preview, we debate some seriously iffy storylines from the end of 2025. David, our resident time traveler, brings us some big stories that either did or didn't happen in the year to come, and Nilay Patel and Wall Street Journal columnist Joanna Stern have to help figure out what's real and what isn't. Will someone really buy Snap? Is GTA VI going to be the biggest game ever? Will Bluesky continue to ascend and leave Threads in its wake? Nobody knows yet, not even the time traveler, but we have some thoughts and ideas.
    As was the case with last week's episode, we're keeping score. Here's how it works: each host has to decide, for each 2025 news story, whether it'll be real or not by the end of the year. Every correct guess earns you a point; every incorrect guess costs you one (see the sketch after the list below). At the end of the year, we'll total the scores, combine them with last week's guesses, and buy the winner the coolest gadget of 2025, whatever that turns out to be. (We're going to need your help deciding, too, but we'll come back to that.)
    Don't read the below until you've listened to or watched the episode, but for the tape, here are our predictions:
    - Tim Cook is still the CEO of Apple. In fact, all four Big Tech CEOs are the same. (Nilay, Joanna, and David all say yes.)
    - Nvidia is the most valuable company in the world. (Nilay says no; Joanna and David say yes.)
    - Somebody acquired Snap. (All say no.)
    - OpenAI is officially a for-profit company. (All say yes.)
    - OpenAI is a profitable company. (All say no.)
    - The government is breaking up one of the big tech companies. (Nilay and David say yes; Joanna says no.)
    - We've had a huge, society-sized AI scandal. (All say yes.)
    - One or more of Max, Paramount Plus, and Peacock no longer exists. (All say yes.)
    - Netflix's live stuff is working, and it basically killed cable already. (All say no.)
    - GTA VI is a huge, giant, smashing success. (Nilay and David say yes; Joanna refuses.)
    - Folding phones are legit mainstream now. (All say no.)
    - The Pixel 10 is by far the best and most successful Android phone. (All say no.)
    - The Nintendo Switch came out, and it's kind of a dud. (All say no.)
    - Apple is actually making a TV. (Nilay and Joanna say no; David says yes.)
    - The new Alexa? It's sick. Everybody's using it. (Nilay and Joanna say no; David says yes.)
    - Waymo's currently having its big moment. (Joanna says yes; Nilay and David say no.)
    - Bluesky is more relevant than Threads. (All say yes.)
    - Bluesky is actually as big as Threads. (All say no.)
    - Apple launched a search engine. (Nilay and Joanna say yes; David says no.)
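    For the curious, the scoring rule is simple enough to tally in a few lines. Here is a minimal, hypothetical sketch in Python; the two picks are quoted from the list above, but the "outcomes" are made up purely to demonstrate the plus-one/minus-one arithmetic, not actual 2025 results.
      # Toy tally of the show's rule: +1 per correct call, -1 per incorrect one.
      predictions = {
          "Nvidia is the most valuable company": {"Nilay": False, "Joanna": True, "David": True},
          "Somebody acquired Snap": {"Nilay": False, "Joanna": False, "David": False},
      }

      def tally(picks, outcomes):
          scores = {}
          for story, guesses in picks.items():
              for host, guess in guesses.items():
                  scores[host] = scores.get(host, 0) + (1 if guess == outcomes[story] else -1)
          return scores

      outcomes = {  # hypothetical end-of-2025 results, for demonstration only
          "Nvidia is the most valuable company": True,
          "Somebody acquired Snap": False,
      }
      print(tally(predictions, outcomes))  # {'Nilay': 0, 'Joanna': 2, 'David': 2}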
  • Nexa AI Releases OmniAudio-2.6B: A Fast Audio Language Model for Edge Deployment
    www.marktechpost.com
    Audio language models (ALMs) play a crucial role in applications ranging from real-time transcription and translation to voice-controlled systems and assistive technologies. However, many existing solutions face limitations such as high latency, significant computational demands, and a reliance on cloud-based processing. These issues pose challenges for edge deployment, where low power consumption, minimal latency, and localized processing are critical. In environments with limited resources or strict privacy requirements, large, centralized models are impractical, and addressing these constraints is essential for unlocking the full potential of ALMs in edge scenarios.
    Nexa AI has announced OmniAudio-2.6B, an audio-language model designed specifically for edge deployment. Unlike traditional architectures that chain separate automatic speech recognition (ASR) and language models, OmniAudio-2.6B integrates Gemma-2-2b, Whisper Turbo, and a custom projector into a unified framework. This design eliminates the inefficiencies and delays associated with chaining separate components, making it well suited for devices with limited computational resources. By focusing on the specific needs of edge environments, Nexa AI offers a model that balances performance with resource constraints.
    Technical Details and Benefits
    OmniAudio-2.6B's architecture is optimized for speed and efficiency. The integration of Gemma-2-2b, a refined LLM, and Whisper Turbo, a robust ASR system, ensures a seamless audio processing pipeline, while the custom projector bridges the two components and reduces latency. Key performance highlights:
    - Processing speed: On a 2024 Mac Mini M4 Pro, OmniAudio-2.6B achieves 35.23 tokens per second in FP16 GGUF format and 66 tokens per second in Q4_K_M GGUF format, using the Nexa SDK. In comparison, Qwen2-Audio-7B, a prominent alternative, processes only 6.38 tokens per second on similar hardware.
    - Resource efficiency: The model's compact design minimizes its reliance on cloud resources, making it ideal for wearables, automotive systems, and IoT devices where power and bandwidth are limited.
    - Accuracy and flexibility: Despite its focus on speed and efficiency, OmniAudio-2.6B delivers high accuracy, making it versatile for tasks such as transcription, translation, and summarization.
    These advancements make OmniAudio-2.6B a practical choice for developers and businesses seeking responsive, privacy-friendly solutions for edge-based audio processing.
    Performance Insights
    Benchmark tests underline the model's performance. On a 2024 Mac Mini M4 Pro, it processes up to 66 tokens per second, roughly ten times the 6.38 tokens per second of Qwen2-Audio-7B. This speed expands the possibilities for real-time audio applications. For example, OmniAudio-2.6B can enhance virtual assistants by enabling faster, on-device responses without the delays of cloud round-trips. In industries such as healthcare, where real-time transcription and translation are critical, the model's speed and accuracy can improve outcomes and efficiency. Its edge-friendly design further enhances its appeal for scenarios requiring localized processing.
    Conclusion
    OmniAudio-2.6B represents an important step forward in audio-language modeling, addressing latency, resource consumption, and cloud dependency. By integrating advanced components into a cohesive framework, Nexa AI has developed a model that balances speed, efficiency, and accuracy for edge environments. With performance metrics showing up to a 10.3x improvement over existing solutions, OmniAudio-2.6B offers a robust, scalable option for a variety of edge applications, reflecting a growing emphasis on practical, localized AI.
    Check out the details and model on Hugging Face. All credit for this research goes to the researchers of this project.
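    As a quick sanity check of the throughput figures quoted above, the following sketch recomputes the speedups from the article's own numbers (tokens per second on a 2024 Mac Mini M4 Pro); nothing here beyond those quoted figures is confirmed.
      # Recompute the speedup ratios from the quoted tokens-per-second figures.
      omniaudio = {"FP16 GGUF": 35.23, "Q4_K_M GGUF": 66.0}
      qwen2_audio_7b = 6.38  # baseline on similar hardware, per the article

      for fmt, tps in omniaudio.items():
          print(f"{fmt}: {tps} tok/s -> {tps / qwen2_audio_7b:.1f}x vs Qwen2-Audio-7B")
      # FP16 GGUF: 35.23 tok/s -> 5.5x vs Qwen2-Audio-7B
      # Q4_K_M GGUF: 66.0 tok/s -> 10.3x vs Qwen2-Audio-7B (the article's "10.3x")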
  • DeepSeek-AI Open Sourced DeepSeek-VL2 Series: Three Models of 3B, 16B, and 27B Parameters with Mixture-of-Experts (MoE) Architecture Redefining Vision-Language AI
    www.marktechpost.com
    Integrating vision and language capabilities in AI has led to breakthroughs in vision-language models (VLMs). These models process and interpret visual and textual data simultaneously, enabling applications such as image captioning, visual question answering, optical character recognition (OCR), and multimodal content analysis. VLMs play an important role in developing autonomous systems, enhanced human-computer interaction, and efficient document-processing tools by bridging the gap between the two data modalities. Still, the complexity of handling high-resolution visual data alongside diverse textual inputs remains a central challenge in this domain.
    Existing research has addressed some of these limitations using static vision encoders, which lack adaptability to high resolutions and variable input sizes. Pretrained language models paired with vision encoders often introduce inefficiencies, as they are not optimized for multimodal tasks. While some models incorporate sparse computation techniques to manage complexity, they frequently fall short on accuracy across diverse datasets. The training datasets used in these models also often lack diversity and task-specific granularity, further hindering performance; for instance, many models underperform in specialized tasks like chart interpretation or dense document analysis.
    Researchers from DeepSeek-AI have introduced the DeepSeek-VL2 series, a new generation of open-source mixture-of-experts (MoE) vision-language models. These models leverage innovations including dynamic tiling for vision encoding, a Multi-head Latent Attention mechanism for language tasks, and the DeepSeek-MoE framework. DeepSeek-VL2 comes in three configurations with different activated parameter counts (activated parameters are the subset of a model's parameters dynamically utilized during a specific task or computation): DeepSeek-VL2-Tiny (3B), DeepSeek-VL2-Small (16B), and DeepSeek-VL2 (27B). This scalability ensures adaptability for various application needs and computational budgets.
    The architecture of DeepSeek-VL2 is designed to optimize performance while minimizing computational demands. The dynamic tiling approach ensures that high-resolution images are processed without losing critical detail, making it particularly effective for document analysis and visual grounding tasks. The Multi-head Latent Attention mechanism allows the model to manage large volumes of textual data efficiently, reducing the computational overhead typically associated with dense language inputs, while the DeepSeek-MoE framework activates only a subset of parameters during task execution, further enhancing scalability and efficiency. Training incorporated a diverse, comprehensive multimodal dataset, enabling the model to excel across tasks including OCR, visual question answering, and chart interpretation.
    On performance benchmarks, the small configuration achieved an impressive 92.3% accuracy on OCR tasks, outperforming existing models by a significant margin, and the models demonstrated a 15% improvement in visual-grounding precision over their predecessors. DeepSeek-VL2 also showed remarkable efficiency, requiring 30% fewer computational resources than comparable models while maintaining state-of-the-art accuracy, and the standard variant achieved leading scores in multimodal reasoning benchmarks. These achievements underscore the models' effectiveness at high-resolution image and text processing.
    Several takeaways from the DeepSeek-VL2 model series:
    - Dynamic tiling: By dividing high-resolution images into smaller tiles, the models improve feature extraction and reduce computational overhead, which is especially useful for dense document analysis and complex visual layouts. (A sketch of the idea follows below.)
    - Scalable configurations: The availability of tiny (3B), small (16B), and standard (27B) configurations ensures adaptability to everything from lightweight deployments to resource-intensive tasks.
    - Comprehensive training data: A dataset encompassing OCR and visual-grounding tasks enhances generalizability and task-specific performance.
    - Sparse computation: Activating only the necessary parameters reduces computational cost without compromising accuracy.
    In conclusion, DeepSeek-VL2 is an open-source vision-language model series whose three variants activate 1.8B, 2.8B, and 4.5B parameters respectively. The research team has introduced a series that excels in real-world applications by addressing critical limitations in scalability, computational efficiency, and task adaptability. Its dynamic tiling and Multi-head Latent Attention mechanisms enable precise image processing and efficient text handling, achieving state-of-the-art results across tasks like OCR and visual grounding, and the series sets a new standard in AI performance with its scalable configurations and comprehensive multimodal dataset.
    Check out the models on Hugging Face. All credit for this research goes to the researchers of this project.
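    To make the dynamic-tiling idea concrete, here is a minimal, illustrative sketch in Python. The 384x384 tile size, the max-nine-tile grid search, and the extra global thumbnail are assumptions for illustration only, not DeepSeek-VL2's confirmed parameters; the point is simply how a high-resolution image can be split into aspect-ratio-matched tiles before encoding.
      # Minimal sketch of dynamic tiling for a vision encoder (assumed parameters).
      from PIL import Image

      TILE = 384  # assumed tile edge length in pixels

      def best_grid(width, height, max_tiles=9):
          # Choose the (cols, rows) grid whose aspect ratio best matches the image.
          target = width / height
          grids = [(c, r) for c in range(1, max_tiles + 1)
                   for r in range(1, max_tiles + 1) if c * r <= max_tiles]
          return min(grids, key=lambda g: abs(g[0] / g[1] - target))

      def tile_image(img):
          cols, rows = best_grid(*img.size)
          resized = img.resize((cols * TILE, rows * TILE))
          tiles = [resized.crop((c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE))
                   for r in range(rows) for c in range(cols)]
          # Tiling schemes often pair the local tiles with one global thumbnail view.
          return img.resize((TILE, TILE)), tiles

      global_view, local_tiles = tile_image(Image.new("RGB", (1920, 1080)))
      print(f"{len(local_tiles)} local tiles + 1 global view")  # 2 local tiles + 1 global view
    Because the grid is chosen per image, a tall document page and a wide chart end up with different tilings, which is what lets the encoder keep fine detail without padding everything to one fixed shape.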