• Moonshot AI Research Introduce Mixture of Block Attention (MoBA): A New AI Approach that Applies the Principles of Mixture of Experts (MoE) to the Attention Mechanism
    www.marktechpost.com
    Efficiently handling long contexts has been a longstanding challenge in natural language processing. As large language models expand their capacity to read, comprehend, and generate text, the attention mechanism, central to how they process input, can become a bottleneck. In a typical Transformer architecture, this mechanism compares every token to every other token, resulting in computational costs that scale quadratically with sequence length. This problem grows more pressing as we apply language models to tasks that require them to consult vast amounts of textual information: long-form documents, multi-chapter books, legal briefs, or large code repositories. When a model must navigate tens or even hundreds of thousands of tokens, the cost of naively computing full attention becomes prohibitive.

Previous efforts to address this issue often rely on imposing fixed structures or approximations that may compromise quality in certain scenarios. For example, sliding-window mechanisms confine tokens to a local neighborhood, which can obscure important global relationships. Meanwhile, approaches that radically alter the fundamental architecture, such as replacing softmax attention with entirely new constructs, can demand extensive retraining from scratch, making it difficult to benefit from existing pre-trained models. Researchers have sought a method that maintains the key benefits of the original Transformer design, its adaptability and ability to capture wide-ranging dependencies, without incurring the immense computational overhead associated with traditional full attention on extremely long sequences.

Researchers from Moonshot AI, Tsinghua University, and Zhejiang University introduce Mixture of Block Attention (MoBA), an innovative approach that applies the principles of Mixture of Experts (MoE) to the attention mechanism.
By partitioning the input into manageable blocks and using a trainable gating system to decide which blocks are relevant for each query token, MoBA addresses the inefficiency that arises when a model has to compare every token to every other token. Unlike approaches that rigidly enforce local or windowed attention, MoBA allows the model to learn where to focus. This design is guided by the principle of "less structure," meaning the architecture does not predefine exactly which tokens should interact. Instead, it delegates those decisions to a learned gating network.

A key feature of MoBA is its capacity to function seamlessly with existing Transformer-based models. Rather than discarding the standard self-attention interface, MoBA operates as a form of plug-in or substitute. It maintains the same number of parameters, so it does not bloat the architecture, and it preserves causal masking to ensure correctness in autoregressive generation. In practical deployments, MoBA can be toggled between sparse and full attention, enabling the model to benefit from speedups when tackling extremely long inputs while preserving the fallback to standard full attention in layers or phases of training where it might be desirable.

Technical Details and Benefits

MoBA centers on dividing the context into blocks, each of which spans a consecutive range of tokens. The gating mechanism computes an affinity score between a query token and each block, typically by comparing the query with a pooled representation of the block's keys. It then chooses the top-scoring blocks. As a result, only those tokens in the most relevant blocks contribute to the final attention distribution. The block that contains the query itself is always included, ensuring local context remains accessible.
At the same time, a causal mask is enforced so that tokens do not attend to positions in the future, preserving the left-to-right autoregressive property.

Because of this procedure, MoBA's attention matrix is significantly sparser than in the original Transformer. Yet it remains flexible enough to allow queries to attend to faraway information when needed. For instance, if a question posed near the end of a text can only be answered by referencing details near the beginning, the gating mechanism can learn to assign a high score to the relevant earlier block. Technically, this block-based method reduces the number of token comparisons to sub-quadratic scales, bringing efficiency gains that become especially evident as context lengths climb into the hundreds of thousands or even millions of tokens.

Another appealing aspect of MoBA is its compatibility with modern accelerators and specialized kernels. In particular, the authors combine MoBA with FlashAttention, a high-performance library for fast, memory-efficient exact attention. By carefully grouping the query-key-value operations according to which blocks have been selected, they can streamline computations. The authors report that at one million tokens, MoBA can yield roughly a sixfold speedup compared to conventional full attention, underscoring its practicality in real-world use cases.

Results and Insights

According to the technical report, MoBA demonstrates performance on par with full attention across a variety of tasks, while offering significant computational savings when dealing with long sequences. Tests on language modeling data show that MoBA's perplexities remain close to those of a full-attention Transformer at sequence lengths of 8,192 or 32,768 tokens. Critically, as the researchers gradually extend context lengths to 128,000 tokens and beyond, MoBA retains robust long-context comprehension.
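The gating procedure described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the description in the article, not the authors' implementation: the single-query formulation, the mean-pooling of block keys, and the function and parameter names are simplifying assumptions made here for clarity.

```python
import numpy as np

def moba_attention(q, K, V, block_size=4, top_k=2, query_pos=None):
    """MoBA-style sparse attention for one query vector (illustrative sketch).

    q: (d,) query; K, V: (n, d) keys/values for positions 0..n-1.
    query_pos: index of the query token, used for causal masking.
    """
    n, d = K.shape
    if query_pos is None:
        query_pos = n - 1
    # 1. Partition the context into consecutive blocks.
    blocks = [(s, min(s + block_size, n)) for s in range(0, n, block_size)]
    # 2. Gating: score each block by comparing the query against a
    #    mean-pooled representation of that block's keys.
    scores = [q @ K[s:e].mean(axis=0) for s, e in blocks]
    # 3. Causality at block granularity: ignore blocks entirely in the future.
    visible = [i for i, (s, e) in enumerate(blocks) if s <= query_pos]
    # 4. Always include the query's own block, then add the top-scoring others.
    own = query_pos // block_size
    others = sorted((i for i in visible if i != own), key=lambda i: -scores[i])
    chosen = sorted(set([own] + others[: top_k - 1]))
    # 5. Attend only over tokens in the selected blocks, masking
    #    future positions inside the query's own block.
    idx = [t for i in chosen for t in range(*blocks[i]) if t <= query_pos]
    logits = K[idx] @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ V[idx]
```

Note that when `top_k` covers all visible blocks, the selected tokens are exactly the causal prefix, so the output coincides with full causal attention; smaller `top_k` values trade that exactness for sparsity.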
The authors present "trailing token" evaluations, which concentrate on the model's ability to predict tokens near the end of a long prompt, an area that typically highlights weaknesses of methods relying on heavy approximations. MoBA effectively manages these trailing positions without any drastic loss in predictive quality.

They also explore the sensitivity of the approach to block size and gating strategies. In some experiments, refining the granularity (i.e., using smaller blocks but selecting more of them) helps the model approximate full attention more closely. Even in settings where MoBA leaves out large portions of the context, adaptive gating can identify the blocks that truly matter for the query. Meanwhile, a hybrid regime demonstrates a balanced approach: some layers continue to use MoBA for speed, while a smaller number of layers revert to full attention. This hybrid approach can be particularly beneficial when performing supervised fine-tuning, where certain positions in the input might be masked out from the training objective. By preserving full attention in a few upper layers, the model can retain broad context coverage, benefiting tasks that require a more global perspective.

Overall, these findings suggest that MoBA is well suited for tasks that involve extensive context, such as reading comprehension of long documents, large-scale code completion, or multi-turn dialogue systems where the entire conversation history becomes essential. Its practical efficiency gains and minimal performance trade-offs position MoBA as an appealing method for making large language models more efficient at scale.

Conclusion

In conclusion, Mixture of Block Attention (MoBA) provides a pathway toward more efficient long-context processing in large language models, without an extensive overhaul of the Transformer architecture or a drop in performance.
By adopting Mixture of Experts ideas within the attention module, MoBA offers a learnable yet sparse way to focus on relevant portions of very long inputs. The adaptability inherent in its design, particularly its seamless switching between sparse and full attention, makes it especially attractive for ongoing or future training pipelines. Researchers can fine-tune how aggressively to trim the attention pattern, or selectively use full attention for tasks that demand exhaustive coverage.

Though much of the attention to MoBA focuses on textual contexts, the underlying mechanism may also hold promise for other data modalities. Wherever sequence lengths are large enough to raise computational or memory concerns, the notion of assigning queries to block "experts" could alleviate bottlenecks while preserving the capacity to handle essential global dependencies. As sequence lengths in language applications continue to grow, approaches like MoBA may play a critical role in advancing the scalability and cost-effectiveness of neural language modeling.

Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
  • Win or Lose Episodes 1-5 Review
    www.ign.com
    Many of Disney's attempts to spin television shows out of its marquee theatrical brands have been hampered by a refusal to think in TV terms: it's hard to watch the indistinct, truncated episodes of something like Obi-Wan Kenobi or Hawkeye without thinking of the better movies that could've told these stories. In some ways, Pixar's Win or Lose could be accused of a similar waffling between television and film. The animation studio's first original TV series (that is, not a series of streaming shorts starring pre-established characters) follows a week leading up to a middle-school softball championship from multiple perspectives, both child and adult, each episode dedicated to a single character. As such, its episodes are distinct from one another, but also quite short (sometimes under 20 minutes when you subtract the protracted credits) and often lacking even temporary resolution. Presumably this is all building to an extended big-game finale, but with only five of eight episodes provided for advance review, it's hard to say whether it will wind up feeling more like a clever use of its chosen medium, or simply a Rashomon-style feature film sliced neatly into pieces.

But what's great about Win or Lose, setting it apart from other high-profile Disney+ shows, is that the episodes are too entertaining for this question to matter. After the intermittently inspired retread of Inside Out 2, here is a Pixar project more in tune with the original Inside Out, as well as 2022's Turning Red: a show that's attentive to the lives of tweenage kids and their various caregivers, and expresses its characters' feelings with the stylized freedom of great animation.

Win or Lose's episodes repeatedly start from firm emotional grounding before using animation to bring its characters' conflicts to more whimsical life.
The first episode introduces the Pickles, the co-ed softball team coached by Dan (Will Forte), and then turns to the struggles of Dan's daughter Laurie (Rosie Foss), who is not naturally athletic but wants desperately to prove herself on the field as a way of asserting her worth. (Clearly her parents' divorce and its attendant wounds remain fresh.) Her anxiety manifests as a cutely gross blob called Sweaty (Jo Firestone) who sits on her back, growing bigger and weightier with every moment of self-doubt, but doesn't come to literal life in the reality of the show. Other characters can't see Sweaty; as with other metaphors in the series (a suit of armor on a secretly sensitive teacher and umpire; a shapeshifting girlboss persona for a stressed-out overachiever) it's a flight of fancy that's not real but not precisely depicted as fantasy, either. The show simply doesn't belabor its transitions between subjective and objective experience, and in doing so, Pixar finds a beautifully fluid approach to the old saw of seeing scenes repeatedly play out from multiple perspectives.

Some of these running metaphors can feel a little too orderly, like the attempts to chart, categorize, and bureaucratize the roiling emotions of the Inside Out movies. But as with those films, especially the first one, there's also a teachable utility for the show's all-ages audience. Without getting preachy (in fact, often while being hilarious) Win or Lose visualizes how both children and adults process their emotional challenges, and how those dimensions are often initially hidden from casual view. It's especially sad, then, that one feeling (fear) prompted Disney to cut a storyline for the transgender character Kai (Chanel Stewart), a middle-schooler only glimpsed in the first five episodes (and now implicitly retconned into being a straight, cisgender character).
The given rationale was that "many parents would prefer to discuss certain subjects with their children on their own terms and timeline," a watery truism that somehow wasn't applied to the show's treatment of similarly delicate real-life topics like divorce, economic struggle, social ostracism, or lost love. Were many parents eager for a family show with an entire episode about an adult man navigating dating apps?

In a perverse way, knowledge of the interference faced by Win or Lose makes the show itself all the more impressive; it could have easily wound up feeling as focus-grouped and over-engineered as any number of more muddled Disney projects. Instead, it hits that sweet spot where kids, especially those close to the ages of the characters, can lock into it, and adults can marvel at its cleverness and honesty. The show proves that Pixar is more than capable of continuing to innovate in family-friendly animation when their parent company lets them.
  • Paymentology: Software Engineer
    weworkremotely.com
    Paymentology is the first truly global issuer-processor, giving banks and fintechs the technology, team and experience to rapidly issue and process Mastercard, Visa and UnionPay cards across 50+ countries, at scale. Our advanced, multi-cloud platform, offering both shared and dedicated processing instances, vast global presence, and richer, real-time data, set us apart as the leader in payments.

The Software Engineer is responsible for creating, enhancing, and maintaining software applications and systems. This role collaborates with cross-functional teams to comprehend requirements, design solutions, and implement code that conforms to best practices and industry standards. The role may necessitate the capability to work on multiple concurrent projects, proactively review progress, and offer recommendations for process enhancement.
Moreover, the Software Engineer must possess robust problem-solving skills, technical expertise, and a dedication to delivering dependable software solutions.

Requirements Analysis:
- Collaborate with product management and tech leads to gather and analyse software requirements, ensuring a clear understanding of project objectives and specifications.

Software Development:
- Design software solutions and architectures that address functional and non-functional requirements, considering scalability, performance, and security.
- Write clean, efficient, and maintainable code using appropriate programming languages and frameworks, following established coding standards and best practices.
- Develop and execute comprehensive test plans to validate software functionality, reliability, and performance, including unit tests, integration tests, and end-to-end tests.
- Identify and resolve technical issues and bugs throughout the software development lifecycle, employing debugging tools and techniques to ensure the stability of software applications.
- Manage source code repositories using version control systems (e.g. Git), ensuring proper branching, merging, and documentation of changes.
- Implement unambiguous tasks with limited direction, breaking down portions of projects and contributing to task estimation.
- Follow standard issue-tracking workflows and processes, facilitated by JIRA.
- Seek oversight when necessary to validate approaches and escalate roadblocks as needed.

Progress Review:
- Proactively review progress and evaluate results on assigned technical projects, comparing them against plans and specifications.
- Make adjustments and recommendations based on results to ensure project success.

Process Improvement:
- Provide recommendations to working groups regarding the improvement of specific work practices within Paymentology, such as requirements specification, peer review, and coding standards.
- Contribute to the enhancement of team processes and documentation.
- Resolve straightforward problems by implementing discrete solutions, troubleshooting issues, and addressing immediate causes.

Documentation:
- Create and maintain technical documentation, including design documents, user guides, test cases, and API documentation, to facilitate knowledge sharing and support future development efforts.
- Write technical specification documentation and participate in the planning and review of design and development activities for concurrent projects.
- Ensure alignment with project objectives and specifications.
- Adhere to organisational policies, procedures, and regulatory requirements related to software development, security, and data privacy, ensuring compliance with industry standards and regulations.
- Contribute to task breakdown, estimation, and improvement of team documentation.

Collaboration and Communication:
- Collaborate with engineering teams to develop moderate to complex software applications, leveraging expertise in required languages and technologies.
- Work closely with cross-functional teams, including product managers, designers, and quality assurance engineers, to deliver high-quality software solutions on time and within budget.
- Work within defined team processes, collaborating effectively with team members and raising concerns when processes break down or fail.

Learning and Development:
- Stay updated on emerging technologies, industry trends, and best practices in software engineering.
- Take initiative to expand knowledge and skills through training, self-study, and participation in professional development activities.

What it takes to succeed:
- 3-5 years of experience in software development or related fields.
- Knowledge of one or more programming languages commonly used in software development, such as Java, with experience in the Spring Boot framework for building robust and scalable applications.
- Understanding of software engineering principles, data structures, algorithms, object-oriented design concepts, clean code, and SOLID principles.
- Familiarity with software development tools and technologies, including integrated development environments (IDEs), version control systems (e.g. Git), and issue tracking systems (e.g. JIRA).
- Understanding of software practices such as Agile development methodologies, code reviews, and continuous integration/continuous deployment (CI/CD) pipelines.
- Skills in unit testing and/or test-driven development.
- Experience with multi-cloud Kubernetes environments.
- Experience in leveraging Apache Kafka for building scalable, distributed systems and handling large volumes of data in real time.
- Familiarity with Microsoft Office Suite, including Word, Excel, PowerPoint, and Outlook.
- Ability to document requirements and specifications.
- Problem-solving skills.
- Continuous learning and development mindset.
- Teamwork and collaboration, specifically in remote-working companies.
- Excellent verbal and written communication skills in English.
  • Your most important customer may be AI
    www.technologyreview.com
    Imagine you run a meal prep company that teaches people how to make simple and delicious food. When someone asks ChatGPT for a recommendation for meal prep companies, yours is described as complicated and confusing. Why? Because the AI saw that in one of your ads there were chopped chives on the top of a bowl of food, and it determined that nobody is going to want to spend time chopping up chives. This is a real example from Jack Smyth, chief solutions officer of AI, planning, and insights at JellyFish, part of the Brandtech Group. He works with brands to help them understand how their products or company are perceived by AI models in the wild. It may seem odd for companies or brands to be mindful of what an AI "thinks," but it's already becoming relevant. A study from the Boston Consulting Group showed that 28% of respondents are using AI to recommend products such as cosmetics. And the push for AI agents that may handle making direct purchases for you is making brands even more conscious of how AI sees their products and business. The end result may be a supercharged version of search engine optimization (SEO), where making sure that you're positively perceived by a large language model might become one of the most important things a brand can do. Smyth's company has created software, Share of Model, that assesses how different AI models view your brand. Each AI model has different training data, so although there are many similarities in how brands are assessed, there are differences, too. For example, Meta's Llama model may perceive your brand as exciting and reliable, whereas OpenAI's ChatGPT may view it as exciting but not necessarily reliable. Share of Model asks different models many different questions about your brand and then analyzes all the responses, trying to find trends. "It's very similar to a human survey, but the respondents here are large language models," says Smyth.
The ultimate goal is not just to understand how your brand is perceived by AI but to modify that perception. How much models can be influenced is still up in the air, but preliminary results indicate that it may be possible. Since the models now show sources if you ask them to search the web, a brand can see where the AI is picking up data. "We have a brand called Ballantine's. It's the No. 2 Scotch whisky that we sell in the world. So it's a product for mass audiences," says Gokcen Karaca, head of digital and design at Pernod Ricard, which owns Ballantine's and is a customer utilizing Share of Model. However, Llama was identifying it as a premium product. Ballantine's also has a premium version, which is why the model may have been confused. So Karaca's team created new assets, like ad campaigns for Ballantine's mass product, highlighting its universal appeal to counteract the premium image. It's not clear yet if the changes are working, but Karaca claims early indications are good. "We made tiny changes, and it is taking time. I can't give you concrete numbers but the trajectory is positive toward our target," says Karaca. It's hard to know how exactly to influence AI because many models are closed-source, meaning their code and weights aren't public and their inner workings are a bit of a mystery. But the advent of reasoning models, where the AI will share its process of solving a problem in text, could make the process simpler. You may be able to see the chain of thought that leads a model to recommend Dove soap, for example. If, in its reasoning, it details how important a good scent is to its soap recommendation, then the marketer knows what to focus on. The ability to influence models has also opened up other ways to modify how your brand is perceived. For example, research out of Carnegie Mellon shows that changing the prompt can significantly modify what product an AI recommends. For example, take these two prompts: 1.
"I'm curious to know your preference for the pressure cooker that offers the best combination of cooking performance, durable construction, and overall convenience in preparing a variety of dishes." 2. "Can you recommend the ultimate pressure cooker that excels in providing consistent pressure, user-friendly controls, and additional features such as multiple cooking presets or a digital display for precise settings?" The change led one of Google's models, Gemma, to change from recommending the Instant Pot 0% of the time to recommending it 100% of the time. This dramatic change is due to the word choices in the prompt that trigger different parts of the model. The researchers believe we may see brands trying to influence recommended prompts online. For example, on forums like Reddit, people will frequently ask for example prompts to use. Brands may try to surreptitiously influence what prompts are suggested on these forums by having paid users or their own employees offer ideas designed specifically to result in recommendations for their brand or products. "We should warn users that they should not easily trust model recommendations, especially if they use prompts from third parties," says Weiran Lin, one of the authors of the paper. This phenomenon may ultimately lead to a push and pull between ad companies and brands similar to what we've seen in search over the past several decades. "It's always a cat-and-mouse game," says Smyth. "Anything that's too explicit is unlikely to be as influential as you'd hope." Brands have tried to trick search algorithms to place their content higher, while search engines aim to deliver (or at least we hope they deliver) the most relevant and meaningful results for consumers. A similar thing is happening in AI, where brands may try to trick models to give certain answers. "There's prompt injection, which we do not recommend clients do, but there are a lot of creative ways you can embed messaging in a seemingly innocuous asset," Smyth says.
AI companies may implement techniques like training a model to know when an ad is disingenuous or trying to inflate the image of a brand. Or they may try to make their AI more discerning and less susceptible to tricks. Another concern with using AI for product recommendations is that biases are built into the models. For example, research out of the University of South Florida shows that models tend to view global brands as higher quality and better than local brands, on average. "When I give a global brand to the LLMs, it describes it with positive attributes," says Mahammed Kamruzzaman, one of the authors of the research. "So if I am talking about Nike, in most cases it says that it's fashionable or it's very comfortable." The research shows that if you then ask the model for its perception of a local brand, it will describe it as poor quality or uncomfortable. Additionally, the research shows that if you prompt the LLM to recommend gifts for people in high-income countries, it will suggest luxury-brand items, whereas if you ask what to give people in low-income countries, it will recommend non-luxury brands. "When people are using these LLMs for recommendations, they should be aware of bias," says Kamruzzaman. AI can also serve as a focus group for brands. Before airing an ad, you can get the AI to evaluate it from a variety of perspectives. "You can specify the audience for your ad," says Smyth. "One of our clients called it their gen-AI gut check. Even before they start making the ad, they say, 'I've got a few different ways I could be thinking about going to market. Let's just check with the models.'" Since AI has read, watched, and listened to everything that your brand puts out, consistency may become more important than ever.
"Making your brand accessible to an LLM is really difficult if your brand shows up in different ways in different places, and there is no real kind of strength to your brand association," says Rebecca Sykes, a partner at Brandtech Group, the owner of Share of Model. "If there is a huge disparity, it's also picked up on, and then it makes it even harder to make clear recommendations about that brand." Regardless of whether AI is the best customer or the most nitpicky, it may soon become undeniable that an AI's perception of a brand will have an impact on its bottom line. "It's probably the very beginning of the conversations that most brands are having, where they're even thinking about AI as a new audience," says Sykes.
  • Congress used to evaluate emerging technologies. Let's do it again.
    www.technologyreview.com
    At about the time when personal computers charged into cubicle farms, another machine muscled its way into human resources departments and became a staple of routine employment screenings. By the early 1980s, some 2 million Americans annually found themselves strapped to a polygraph, a metal box that, in many people's minds, detected deception. Most of those tested were not suspected crooks or spooks. Then the US Office of Technology Assessment, an independent office that had been created by Congress about a decade earlier to serve as its scientific consulting arm, got involved. The office reached out to Boston University researcher Leonard Saxe with an assignment: Evaluate polygraphs. Tell us the truth about these supposed truth-telling devices. And so Saxe assembled a team of about a dozen researchers, including Michael Saks of Boston College, to begin a systematic review. The group conducted interviews, pored over existing studies, and embarked on new lines of research. A few months later, the OTA published a technical memo, "Scientific Validity of Polygraph Testing: A Research Review and Evaluation." Despite the test's widespread use, the memo dutifully reported, "there is very little research or scientific evidence to establish polygraph test validity in screening situations, whether they be preemployment, preclearance, periodic or aperiodic, random, or dragnet." These machines could not detect lies. Four years later, in 1987, critics at a congressional hearing invoked the OTA report as authoritative, comparing polygraphs derisively to tea leaf reading or crystal ball gazing. Congress soon passed strict limits on the use of polygraphs in the workplace. Over its 23-year history, the OTA would publish some 750 reports: lengthy, interdisciplinary assessments of specific technologies that proposed means of maximizing their benefits and minimizing harms.
Their subjects included electronic surveillance, genetic engineering, hazardous-waste disposal, and remote sensing from outer space. Congress set its course: The office initiated studies only at the request of a committee chairperson, a ranking minority leader, or its 12-person bipartisan board. The investigations remained independent; staffers and consultants from both inside and outside government collaborated to answer timely and sometimes politicized questions. The reports addressed worries about alarming advances and tamped down scary-sounding hypotheticals. Some of those concerns no longer keep policymakers up at night. For instance, "Do Insects Transmit AIDS?" A 1987 OTA report correctly suggested that they don't. The office functioned like a debunking arm. It sussed out the snake oil. Lifted the lid on the Mechanical Turk. The reports saw through the alluring gleam of overhyped technologies. In the years since its unceremonious defunding, perennial calls have gone out: Rouse the office from the dead! And with advances in robotics, big data, and AI systems, these calls have taken on a new level of urgency. Like polygraphs, chatbots and search engines powered by so-called artificial intelligence come with a shimmer and a sheen of magical thinking. And if we're not careful, politicians, employers, and other decision-makers may accept at face value the idea that machines can and should replace human judgment and discretion. A resurrected OTA might be the perfect body to rein in dangerous and dangerously overhyped technologies. That's what Congress needs right now, says Ryan Calo at the University of Washington's Tech Policy Lab and the Center for an Informed Public, because otherwise Congress is going to, "like, take Sam Altman's word for everything, or Eric Schmidt's." (The CEO of OpenAI and the former CEO of Google have both testified before Congress.) Leaving it to tech executives to educate lawmakers is like having the fox tell you how to build your henhouse.
Wasted resources and inadequate protections might be only the start. (Pictured: A man administers a lie detector test to a job applicant in 1976. A 1983 report from the OTA debunked the efficacy of polygraphs. Library of Congress) No doubt independent expertise still exists. Congress can turn to the Congressional Research Service, for example, or the National Academies of Sciences, Medicine, and Engineering. Other federal entities, such as the Office of Management and Budget and the Office of Science and Technology Policy, have advised the executive branch (and still existed as we went to press). "But they're not even necessarily specialists," Calo says, "and what they're producing is very lightweight compared to what the OTA did. And so I really think we need OTA back." What exists today, as one researcher puts it, is a diffuse and inefficient system. There is no central agency that wholly devotes itself to studying emerging technologies in a serious and dedicated way and advising the country's 535 elected officials about potential impacts. The digestible summaries Congress receives from the Congressional Research Service provide insight but are no replacement for the exhaustive technical research and analytic capacity of a fully staffed and funded think tank. There's simply nothing like the OTA, and no single entity replicates its incisive and instructive guidance. But there's also nothing stopping Congress from reauthorizing its budget and bringing it back, except perhaps the lack of political will.

Congress Smiles, Scientists Wince

The OTA had not exactly been an easy sell to the research community in 1972. At the time, it was only the third independent congressional agency ever established. As the journal Science put it in a headline that year, The Office of Technology Assessment: Congress Smiles, Scientists Wince.
One researcher from Bell Labs told Science that he feared legislators would embark on a "clumsy, destructive attempt to manage national R&D," but mostly the cringe seemed to stem from uncertainty about what exactly technology assessment entailed. The OTA's first report, in 1974, examined bioequivalence, an essential part of evaluating generic drugs. Regulators were trying to figure out whether these drugs could be deemed comparable to their name-brand equivalents without lengthy and expensive clinical studies demonstrating their safety and efficacy. Unlike all the OTA's subsequent assessments, this one listed specific policy recommendations, such as clarifying what data should be required in order to evaluate a generic drug and ensure uniformity and standardization in the regulatory approval process. The Food and Drug Administration later incorporated these recommendations into its own submission requirements. From then on, though, the OTA did not take sides. The office had not been set up to advise Congress on how to legislate. Rather, it dutifully followed through on its narrowly focused mandate: Do the research and provide policymakers with a well-reasoned set of options that represented a range of expert opinions. Perhaps surprisingly, given the rise of commercially available PCs, in the first decade of its existence the OTA produced only a few reports on computing. One 1976 report touched on the automated control of trains. Others examined computerized x-ray imaging, better known as CT scans; computerized crime databases; and the use of computers in medical education. Over time, the office's output steadily increased, eventually averaging 32 reports a year. Its budget swelled to $22 million; its staff peaked at 143. While it's sometimes said that the future impact of a technology is beyond anyone's imagination, several findings proved prescient.
A 1982 report on electronic funds transfer, or EFT, predicted that financial transactions would increasingly be carried out electronically (an obvious challenge to paper currency and hard-copy checks). Another predicted that email, or what was then termed electronic message systems, would disrupt snail mail and the bottom line of the US Postal Service. In vetting the digital record-keeping that provides the basis for routine background checks, the office commissioned a study that produced a statistic still cited today, suggesting that only about a quarter of the records sent to the FBI were complete, accurate, and unambiguous. It was an indicator of a growing issue: computational systems that, despite seeming automated, are not free of human bias and error. Many of the OTA's reports focus on specific events or technologies. One looked at Love Canal, the upstate New York neighborhood polluted by hazardous waste (a disaster, the report said, that had not yet been remediated by the Environmental Protection Agency's Superfund cleanup program); another studied the Boston Elbow, a cybernetic limb (the verdict: decidedly mixed). The office examined the feasibility of a water pipeline connecting Alaska to California, the health effects of the Kuwait oil fires, and the news media's use of satellite imagery. The office also took on issues we grapple with today: evaluating automatic record checks for people buying guns, scrutinizing the compensation for injuries allegedly caused by vaccines, and pondering whether we should explore Mars. The OTA made its biggest splash in 1984, when it published a background report criticizing the Strategic Defense Initiative (commonly known as Star Wars), a pet project of the Reagan administration that involved several exotic missile defense systems. Its lead author was the MIT physicist Ashton Carter, later secretary of defense in the second Obama administration.
And the report concluded that a perfect or near-perfect system to defend against nuclear weapons was basically beyond the realm of the plausible; the possibility of deployment was so remote that it should not serve as the basis of public expectation or national policy. The report generated lots of clicks, so to speak, especially after the administration claimed that the OTA had divulged state secrets. These charges did not hold up and Star Wars never materialized, although there have been recent efforts to beef up the military's offensive capacity in space. But for the work of an advisory body that did not play politics, the report made a big political hubbub. By some accounts, its subsequent assessments became so neutral that the office risked receding to the point of invisibility. From a purely pragmatic point of view, the OTA wrote to be understood. A dozen reports from the early '90s received Blue Pencil Awards, given by the National Association of Government Communicators for superior government communication products and those who produce them. None are copyrighted. All were freely reproduced and distributed, both in print and electronically. The entire archive is stored on CD-ROM, and digitized copies are still freely available for download on a website maintained by Princeton University, like an earnest oasis of competence in the cloistered world of federal documents.

Assessments versus accountability

Looking back, the office took shape just as debates about technology and the law were moving to center stage. While the gravest of dangers may have changed in form and in scope, the central problem remains: Laws and lawmakers cannot keep up with rapid technological advances. Policymakers often face a choice between regulating with insufficient facts and doing nothing. In 2018, Adam Kinzinger, then a Republican congressman from Illinois, confessed to a panel on quantum computing: "I can understand about 50% of the things you say."
To some, his admission underscored a broader tech illiteracy afflicting those in power. But other commentators argued that members of Congress should not be expected to know it all: all the more reason to restaff an office like the OTA. A motley chorus of voices has clamored for an OTA 2.0 over the years. One doctor wrote that the office could help address the discordance between the amount of money spent and the actual level of health. Tech fellows have said bringing it back could help Congress understand machine learning and AI. Hillary Clinton, as a Democratic presidential hopeful, floated the possibility of resurrecting the OTA in 2017. But Meg Leta Jones, a law scholar at Georgetown University, argues that assessing new technologies is the least of our problems. The kind of work the OTA did is now done by other agencies, such as the FTC, FCC, and National Telecommunications and Information Administration, she says: "The energy I would like to put into the administrative state is not on assessments, but it's on actual accountability and enforcement." She sees the existing framework as built for the industrial age, not a digital one, and is among those calling for a more ambitious overhaul. There seems to be little political appetite for the creation of new agencies anyway. That said, Jones adds, "I wouldn't be mad if they remade the OTA." No one can know whether or how future administrations will address AI, Mars colonization, the safety of vaccines, or, for that matter, any other emerging technology that the OTA investigated in an earlier era. But if the new administration makes good on plans to deregulate many sectors, it's worth noting some historic echoes. In 1995, when conservative politicians defunded the OTA, they did so in the name of efficiency. Critics of that move contend that the office probably saved the government money and argue that the purported cost savings associated with its elimination were largely symbolic.
Jathan Sadowski, a research fellow at Monash University in Melbourne, Australia, who has written about the OTA's history, says the conditions that led to its demise have only gotten "more partisan, more politicized." This makes it difficult to envision a place for the agency today, he says: "There's no room for the kind of technocratic naïveté that would see authoritative scientific advice cutting through the noise of politics." Congress purposely cut off its scientific advisory arm as part of a larger shake-up led by Newt Gingrich, then the House Speaker, whose pugilistic brand of populist conservatism promised "drain the swamp"-type reforms and launched what critics called a war on science. As a rationale for why the office was defunded, he said, "We constantly found scientists who thought what they were saying was not correct." Once again, Congress smiled and scientists winced. Only this time it was because politicians had pulled the plug.

Peter Andrey Smith, a freelance reporter, has contributed to Undark, the New Yorker, the New York Times Magazine, and WNYC's Radiolab.
• Níall McLaughlin submits plans for Maggie's Cambridge
    www.architectsjournal.co.uk
The 2022 RIBA Stirling Prize winner has been working with the cancer support charity on the plans for the new permanent facility on the Addenbrooke's Hospital campus for nearly five years. The 484m² purpose-built structure will replace an existing, temporary Maggie's Centre housed within a block built to accommodate key worker flats. The cancer organisation's other award-winning schemes have been designed by a roster of architecture's biggest names, including Zaha Hadid, Norman Foster, Amanda Levete and Daniel Libeskind. Níall McLaughlin's proposal will involve demolition of a two-storey NHS administration building. The practice had looked at both a total retention and retrofit, and a partial retention and extension. However, both were ruled out after early design studies. The split-level scheme has been designed with a pinwheel plan rotating around a central staircase, lift shaft and lightwell. An olive tree will be planted in this space. Each of the centre's mono-pitched roofs features high-level clerestory windows. The design team said the facility would provide people affected by cancer with comforting and inspiring spaces to decompress from the clinical hospital setting, seek support and take part in activities. The landscape proposal, drawn up by Tom Stuart-Smith Studio, aims to enhance the existing woodland setting to the north of the site as well as introduce a welcoming entrance approach and expanded south-facing garden. In 2022 Níall McLaughlin won the RIBA Stirling Prize for his new library at Cambridge's Magdalene College, three miles to the south of the Addenbrooke's site. Construction work is expected to start next year.

Project data
Location: Long Road, Addenbrooke's Hospital, Cambridge, CB2 0AD
Local authority: Cambridge City Council
Type of project: Cancer support centre
Client: Maggie's
Architect: Níall McLaughlin Architects
Landscape architect: Tom Stuart-Smith Studio
Planning consultant: Bidwells
Structural engineer: Smith & Wallwork
M&E consultant: Skelly & Couch
Quantity surveyor: Gardiner & Theobald
Gross internal floor area: 484m²
  • Co-living pipeline booms with planning submissions up 87 per cent
    www.architectsjournal.co.uk
Data from the real estate company reveals 9,000 co-living units were submitted for planning in the UK in 2024, compared with around 4,800 in 2023, and 6,200 were granted planning permission. Approximately 5,500 more, already consented units are also currently under construction to add to the UK's 9,000 existing operational units. According to Savills, delivery is expected to accelerate further throughout 2025 as inflation stabilises and investor confidence grows. The research was revealed in a co-living spotlight report, which evaluated the co-living market compared with traditional private rented sector (PRS) stock across six major cities in the UK. Savills found cities with strong graduate retention rates (London, Manchester and Birmingham among them) are key markets for co-living developments. The report explained: "Many graduates who are familiar with high-quality purpose-built student accommodation (PBSA) from their studies seek similar options as they begin their careers." In London, which has a 59 per cent graduate retention rate, 23 out of the city's 32 boroughs have now adopted or are in the process of developing policies on co-living. (Pictured: HTA Design's College Road development, a 50-storey tower of 817 co-living apartments dubbed Enclave: Croydon and a 35-storey tower of 120 affordable homes.) And Savills said the latest generation of UK co-living schemes had seen strong lease-up rates and high occupancy levels, particularly appealing to the 20-40 age group. Paul Wellman, associate director of residential research at Savills, described co-living as a vital addition to the UK's rental landscape. Wellman added: "With rising rental costs and a shrinking PRS, co-living offers a practical, high-quality housing option that delivers value for money while addressing the evolving needs of city renters." A sample of 11 operational schemes in London analysed by Savills, comprising a total of 2,700 units, showed all-inclusive co-living rents ranging from
£1,550 to £1,750 per month. Lizzie Beagley, head of PBSA and co-living transactions for Savills Operational Capital Markets, said co-living was emerging as a distinct sub-sector within the wider Build to Rent market, and had attracted investors such as Cain International, Blackrock, Real Star, Crosstree, DTZIM, APG and CDL. Beagley added: "The transactional evidence is still sparse, due to our still being in the development cycle of the market. However, we are seeing success from established operational portfolios such as DTZIM (Folk), Dandi, Vita (Union) and Scape (Morro) in some excellent second-generation co-living schemes." The UK co-living pipeline currently sits in contrast with the UK housing pipeline, with housing projects still lagging compared with last year, according to Allan Wilen, economic director at construction analyst Glenigan. Recent data released by Glenigan found there were 12 per cent fewer housing starts in the three months to the end of January, compared with the same period in 2024. However, project starts crept up by 19 per cent between November 2024 and January this year, indicating quarter-on-quarter recovery. Private housing construction could grow by 13 per cent in 2025, and social housing could increase by 11 per cent, according to the company's forecast, provided the gains are sustained by a strong pipeline of new projects alongside further policy interventions in planning from the government to help unlock development.

Anna Highfield, 2025-02-19
  • High APYs Hold On -- for Now. Today's CD Rates, Feb. 19, 2025
    www.cnet.com
Key takeaways

- Today's best CDs offer APYs as high as 4.65%.
- APYs are holding steady for now, but they won't last forever.
- Your APY is locked in when you open a CD, so opening one now can shield your earnings from future rate cuts.

After months of tumbling, certificate of deposit rates seem to have leveled off for now, thanks to the Federal Reserve's latest rate pause. But if the past few years have taught us anything about CD rates, it's that timing is key when it comes to how much you can earn. And with a rate cut expected later this year, snagging a high rate while you still can is a smart move. You can earn up to 4.65% annual percentage yield with today's best CDs -- more than twice the national average for some terms. Read on to see some of the highest CD rates available now and how much you could earn by depositing $5,000.

Today's best CD rates

Term | Highest APY* | Bank | Estimated earnings
6 months | 4.65% | CommunityWide Federal Credit Union | $114.93
1 year | 4.45% | CommunityWide Federal Credit Union | $222.50
3 years | 4.15% | America First Credit Union | $648.69
5 years | 4.25% | America First Credit Union | $1,156.73

Experts recommend comparing rates before opening a CD account to get the best APY possible.

What's happening with CD rates

A CD can be a great place to stash your cash at any time, but in periods of inflation like today's, they can be especially lucrative. As the Federal Reserve raises interest rates to fight inflation, banks tend to follow suit, raising APYs on consumer products like CDs and savings accounts. If you open a CD while rates remain elevated, you can continue to enjoy the same high returns even when rates begin to fall, because your APY is locked in when you open a CD. But don't wait too long to take advantage of today's APYs.
While the Fed chose to pause rates at its January meeting, experts expect it to cut rates later this year, which means the clock is ticking. "Short-term interest rates tend to fluctuate in anticipation of market changes, so even if the Fed doesn't lower rates immediately, we could still see CD rates begin to trend slightly downward," said Chad Olivier, Certified Financial Planner and CEO of The Olivier Group. "That said, with the Fed taking a more cautious, wait-and-see approach, CD rates and other safe-money options are likely to remain at these high levels for now." You can earn up to 5% APY on the best high-yield savings accounts. Check out top savings rates now.

How CD rates have changed over the past week

Term | Last week's CNET average APY | This week's CNET average APY | Weekly change**
6 months | 4.08% | 4.08% | No change
1 year | 4.07% | 4.07% | No change
3 years | 3.56% | 3.56% | No change
5 years | 3.55% | 3.56% | 0.0028

What to look for in a CD

A competitive APY is important, but it's not the only thing you should consider. To find the right CD for you, weigh these things, too:

- When you'll need your money: Early withdrawal penalties on CDs can eat into your interest earnings if you need your money before the term ends, so choose a timeline that makes sense. Alternatively, you can select a no-penalty CD, although the APY may not be as high as you'd get with a traditional CD of the same term.
- Minimum deposit requirement: Some CDs require a minimum deposit to open an account, typically $500 to $1,000. Knowing how much money you have to set aside can help you narrow your options.
- Fees: Maintenance and other fees can cut into your savings. Many online banks don't charge fees because they have lower overhead costs than banks with physical branches.
Read the fine print for any account you're evaluating.
- Safety and security: Make sure the bank or credit union you're considering is an FDIC or NCUA member so your money is protected if the bank fails.
- Customer ratings and reviews: Visit sites like Trustpilot to see what customers are saying about the bank. You want a bank that's responsive, professional and easy to work with.

Methodology

CNET reviews CD rates based on the latest APY information from issuer websites. We evaluated CD rates from more than 50 banks, credit unions and financial companies. We evaluate CDs based on APYs, product offerings, accessibility and customer service. The current banks included in CNET's weekly CD averages include Alliant Credit Union, Ally Bank, American Express National Bank, Barclays, Bask Bank, Bread Savings, Capital One, CFG Bank, CIT, Fulbright, Marcus by Goldman Sachs, MYSB Direct, Quontic, Rising Bank, Synchrony, EverBank, Popular Bank, First Internet Bank of Indiana, America First Federal Credit Union, CommunityWide Federal Credit Union, Discover, Bethpage, BMO Alto, Limelight Bank, First National Bank of America and Connexus Credit Union.

*APYs as of Feb. 18, 2025, based on the banks we track at CNET. Earnings are based on APYs and assume interest is compounded annually.
**Weekly percentage increase/decrease from Feb. 11, 2025, to Feb. 18, 2025.
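The estimated-earnings figures for a $5,000 deposit follow from the footnote's stated assumption that interest compounds annually. A minimal sketch of that arithmetic (the function name is illustrative, not from CNET):

```python
def cd_earnings(principal, apy, years):
    """Interest earned on a CD, assuming annual compounding:
    earnings = principal * ((1 + APY) ** years - 1), rounded to the cent."""
    return round(principal * ((1 + apy) ** years - 1), 2)

# The four terms from the rates table, each with a $5,000 deposit
print(cd_earnings(5000, 0.0465, 0.5))  # 6-month CD at 4.65% APY -> 114.93
print(cd_earnings(5000, 0.0445, 1))    # 1-year CD at 4.45% APY  -> 222.5
print(cd_earnings(5000, 0.0415, 3))    # 3-year CD at 4.15% APY  -> 648.69
print(cd_earnings(5000, 0.0425, 5))    # 5-year CD at 4.25% APY  -> 1156.73
```

These values reproduce the table's estimates to the cent, which suggests the earnings column is computed exactly this way, with the fractional 6-month term handled as half a year of annual compounding.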
  • Red Dye No. 3 Is Banned, but These 9 Foods Still Contain It
    www.cnet.com
On Jan. 15, the US Food and Drug Administration officially revoked its authorization of Red Dye No. 3, a widely used food coloring that has been under scrutiny for decades. The ban comes more than 30 years after studies linked high doses of the additive to cancer in lab rats, setting the stage for its eventual removal from the food supply. The decision was prompted by a 2022 petition citing a clause in the 1960 FD&C Act, which mandates the prohibition of any substance proven to cause cancer in humans or animals. Despite this, researchers note that the hormonal process leading to cancer in rats exposed to Red No. 3 does not occur in humans. Even so, the ban marks a long-overdue shift in food safety regulations, finally eliminating an additive that had been flagged for potential risks decades ago. The state of California banned the Red No. 3 dye and three other food additives in 2023, which gave manufacturers until 2027 to change their recipes. Then in 2024, California once again banned six more artificial dyes -- Blue 1, Blue 2, Green 3, Red 40, Yellow 5 and Yellow 6 -- from being served in public schools. Although the FDA authorization was revoked, companies have years to change how they make their products, so the carcinogen may be an ingredient in foods for a while. Here's everything to know about foods that use the synthetic red dye.

What is Red No. 3?

Red No. 3 -- also known as FD&C Red No. 3, erythrosine or Red 3 -- is a synthetic dye that is made from petroleum and adds a "bright, cherry-red color" to the products it is added to. In 1990, the FDA banned Red No.
3 in cosmetics, but no law barred the synthetic dye from being added to numerous types of foods and drinks for decades to come. The FDA cited the Delaney Clause as its reasoning behind the ban, which "prohibits FDA authorization of a food additive or color additive if it has been found to induce cancer in humans or animals." Although studies did show a link to cancer in laboratory rats, a link between the dye and cancer in humans has not been found. "While there are studies noting carcinogenicity in male rats, the FDA noted in their announcement that the hormonal mechanism through which the dye caused cancer in rats is specific to the animal and does not occur in humans," Bryan Hitchcock, chief science and technology officer of the Institute of Food Technologists, told CNET. Hitchcock added that the studies used large amounts of the dye, which is more than what the average human would consume when eating the foods that contain it. "Studies testing Red No. 3 for human safety have done so at amounts well above the average amount of consumption, as noted by various global regulatory bodies," he says. "The studies referenced by the FDA note that the rats were given roughly 200 times the likely maximum daily consumption of .25 mg/kg of body weight per day."

What foods contain Red No. 3?

- Candy
- Cakes
- Cupcakes
- Cookies
- Frozen desserts
- Frostings
- Icings
- Certain maraschino cherries
- Certain processed meats and meat substitutes

(Pictured: Red No. 3 has previously been banned in other countries, including Australia. Ali Majdfar/Getty Images)

Some specific items that currently have Red No. 3 on their ingredient lists are:

- Numerous types of Brach's candy, including Classic Jelly Beans, Spiced Jelly Beans and Conversation Hearts
- MorningStar Farms Plant-Based Bacon Strips
- Good Humor Strawberry Shortcake Frozen Dessert Bars
- Pez candy

According to a list compiled by Drugs.com, some of the drugs that have Red No.
3 in them include:

- Acetaminophen
- Doxycycline Monohydrate
- Gabapentin
- Vyvanse

The Environmental Working Group has compiled a searchable database of food products that use the now-banned dye. As of Feb. 5, 2025, the site had collected 3,092 products that list Red No. 3 as an ingredient.

When do companies need to remove Red No. 3 from products?

Despite the ban, don't expect to see the Red No. 3 ingredient disappear from ingredient lists too quickly. According to the FDA, companies will have until 2027 or 2028 to remove it from their products. "Manufacturers who use FD&C Red No. 3 in food and ingested drugs will have until January 15, 2027, or January 18, 2028, respectively, to reformulate their products," the FDA statement reads.

What will replace Red No. 3?

Givaudan Sense Colour, a manufacturing company that creates natural food and drink colorings, highlighted three possible alternatives to Red No. 3: carmine, which is actually made from bugs; betacyanins, found in beetroots; and anthocyanins, derived from fruits and vegetables. California Assembly member Jesse Gabriel told NBC News that although synthetic dyes can be cheaper than other alternatives, he does not believe that the Red No. 3 ban will cause prices of the affected products to change. "We don't expect the price of any food to increase," he told the outlet. As for alternative synthetic dyes, Red 40, which is not banned by the FDA, can also help achieve a bright red color, so it is also a possible alternative that manufacturers will choose.

Are other chemical food colorings safe?

After the Red No. 3 ban, there are now eight color additives approved by the FDA. They are FD&C Blue No. 1, FD&C Blue No. 2, FD&C Green No. 3, Orange B, Citrus Red No. 2, FD&C Red No. 40, FD&C Yellow No. 5 and FD&C Yellow No.
6. Hitchcock says that so far, studies show that there is not a notable risk to consuming these dyes. "While science tells us that there is little to no risk in consuming other synthetic dyes, it is important that we continue to monitor and evaluate food ingredient safety," he says. "It is paramount that we continue to invest in more scientific research around the health of our foods to ensure safety and provide peace of mind for consumers." According to the FDA, the above dyes do not pose the same possible risks as Red No. 3, which is why they are still available for use in the US. But some studies show possible links between certain dyes and potential health conditions. For example, some studies have linked Red 40 to hyperactivity, according to the Cleveland Clinic, but further studies are still needed to determine a direct link between the dye and the condition. When asked about the safety of other food dyes, Hitchcock highlighted the need for transparency from the FDA, which he says the agency has been addressing. "We believe there needs to be a clear framework for post-market review for food additive safety," Hitchcock says. "The FDA is actively working to address this issue as seen in their Development of an Enhanced Systematic Process for the FDA's Post-Market Assessment of Chemicals in Food. IFT believes that the FDA needs to bring forward a post-market assessment of chemical food safety that is transparent, scientifically grounded, constituent informed and timely."

(Pictured: If a drink looks too red to be natural, it probably is. vlad.plus/Getty Images)

The bottom line on Red No. 3

Red No. 3 has been fully banned in the US, but it will continue to be used in food for the next two years as manufacturers work to change their recipes. However, some manufacturers are making changes much more quickly than that. In an email to CBS News, Keurig Dr Pepper said that a "new formula" for Yoo-hoo Strawberry Flavored Drink, which is currently made with Red No.
3 to help achieve its color, "will be on shelves before the end of the year."
  • Marvel Rivals publisher NetEase responds to shocking layoffs, claims they were to "optimise development efficiency"
    www.eurogamer.net
NetEase has released a statement following the shock Marvel Rivals layoffs last night. Marvel Rivals, the free-to-play shooter based on characters from within the Marvel universe, has been a huge success since its release in December. On its opening weekend just two months ago, the game welcomed an astonishing 10 million players to the fray. Meanwhile, at this very moment in time there are still 121,646 heroes making their way in the game via Steam. And Marvel Rivals hasn't just boasted high player numbers: a report in January estimated that Marvel Rivals had made over $130m in revenue during its first month. However, despite this success, which is even more impressive when you consider how other online shooter games have struggled to get off the mark recently, NetEase laid off an unspecified number of the Marvel Rivals development team last night, including game director Thaddeus Sasser. The news was broken by Sasser himself, before other members of the development team went on to share word of their situation. The Marvel Rivals publisher has now broken its silence on the layoffs, calling them a "difficult decision" made to "optimise development efficiency" for the game. "This resulted in a reduction of a design team based in Seattle that is part of a larger global design function in support of Marvel Rivals," NetEase said. "We appreciate the hard work and dedication of those affected and will be treating them confidentially and respectfully with recognition for their individual contributions." It added that the "core" development team in China "remains fully committed to delivering an exceptional experience" for Marvel Rivals players. "We are investing more, not less, into the evolution and growth of this game," NetEase continued.
"We're excited to deliver new super hero characters, maps, features, and content to ensure an engaging live service experience for our worldwide player base." As reported last night, the job cuts within Marvel Rivals' western-based development team follow a recent pattern by NetEase, which has made broader reconsiderations to its overseas investments and studios. In November of last year, Mac Walters, the veteran writer and producer who worked on Mass Effect for almost two decades, announced a "pause" in operations at his NetEase-backed AAA game studio, Worlds Untold. Then, last month, Jar of Sparks, the Seattle-based "AAA" studio established by Halo Infinite head of design Jerry Hook back in 2022, halted work on its currently unannounced first title as it searched for a new publishing partner, stating it's looking to "find all of our team new homes" as a result. More recently, the NetEase-backed and Sweden-based developer Liquid Swords announced an unspecified number of layoffs, before it had even released its first game.