• Phil Spencer on multiplatform strategy: "There's no reason to put a ring fence around any game"
    www.gamesindustry.biz
    Microsoft also planning to support Switch 2 with ports of Xbox games. News by Sophie McEvoy, Staff Writer. Published on Jan. 27, 2025.

    Microsoft's Phil Spencer revealed further details of Xbox's changing brand identity, intending to make its first-party titles available across multiple platforms.

    In an interview with independent games journalist Destin Legarie (via Eurogamer), Spencer said that Xbox is moving away from first-party titles like Starfield and Indiana Jones having an "exclusivity window".

    "There's no reason for me to put a ring fence around any game and say this game will not go to a place where it would find players, where it would have business success for us," Spencer said. "Our strategy is to allow our games to be available."

    "Game Pass is an important component of playing the games on our platform, but to keep games off other platforms? That's not a path for us; that doesn't work for us."

    He explained that Microsoft wants Xbox to be the platform that enables "the world's biggest games" to be available in "multiple places."

    "We think that's what makes us unique," he said. "Most of the other platforms out there are single platforms on a single device, whether that's on PC, mobile, or console. And we want Xbox to be a platform that enables creators across any screen that people want to play on."

    When asked whether this would cause Xbox's brand identity to evolve, Spencer said it is forever changing and adapting to the industry.

    "This is all about making sure that your library of games that you own on Xbox are playable in multiple places, so I'd say it's an evolution of our identity. But I believe it's an identity that the industry needs.

    "When you think about where this industry is now and you see the challenges, the business challenges that are out there for many companies, I think making games more accessible to more people has got to be front and centre for us as an industry."

    In another interview with Gamertag Radio, Spencer also confirmed Microsoft is planning to support the Nintendo Switch 2 with ports of Xbox games.

    "I'm really looking forward to supporting [Nintendo] with games that we have," he said. "They're a really important part of this industry."

    Last February, Microsoft announced it would be bringing four games to other consoles as a step in its multiplatform strategy.

    "We don't damage Xbox and we can grow our business using what other platforms have to help us with that," Spencer said. "Looking forward, I think there is an interesting story for us introducing Xbox franchises to players on other platforms to get them more interested in Xbox. We think there's a good brand value for Xbox there."
  • iPhone SE 4 appears in new photos and video, notch and all
    www.theverge.com
    We might have just gotten our best look yet at Apple's next affordable iPhone SE, shown in both video and photographs of what's either a real phone or a convincing dummy unit. Despite some reports that the next SE would adopt recent iPhones' Dynamic Island design, this model appears to stick with the older notch.

    Leaker Majin Bu shared a short video over the weekend that shows the new phone in bright daylight, following it up a day later with photos of both white and black versions. Like previous iPhone SE models there's only a single rear camera, though this appears to be the first in the line to feature a USB-C port, now a requirement for the phone to be sold in the EU.

    The biggest surprise is that the phone features Apple's older notched display, rather than the Dynamic Island design that leaker Evan Blass had tipped it to include. This is hard to make out clearly in the video, but the selfie camera's position just left of centre matches the iPhone 14's notched setup, and besides, the leaker himself has confirmed in replies to the post that there's a notch.

    This isn't our first look at the new SE, though it is our clearest. Sonny Dickson shared two photos of similar-looking SE 4 dummy units two weeks ago that he then put on sale.

    The SE 4 is rumored to switch to an OLED display, and is expected to include enough RAM to support Apple Intelligence features. Rumors point to a launch around March or April, which makes sense given the SE 3 launched in March 2022.
  • Oscars: The Case for How Conclave Wins Best Picture
    www.denofgeek.com
    It might seem strange to say that Conclave, a movie nominated for eight Oscars including Best Picture and Best Actor, is now something of a Best Picture long shot, but that is what conventional wisdom suggests. While the Vatican thriller received a host of nominations, including a Best Adapted Screenplay nod for Peter Straughan and Best Film Editing for Nick Emerson, the film's absence was conspicuously noted in the Best Director race, where James Mangold got in for A Complete Unknown over Edward Berger. It was also snubbed in Best Cinematography where, to more than a few folks' surprise, Emilia Pérez was nominated instead.

    The ascent of Emilia Pérez has indeed frightened some corners of social media, which has lashed back hard at the new frontrunner status applied to the Jacques Audiard film. As the horse race narrative emerges, folks are already leaning one way or the other on whether they think this will finally be Netflix's year thanks to Emilia Pérez, or if the critical esteem previously accumulated by Brady Corbet and A24's The Brutalist (which received 10 nominations, including for Directing and Cinematography) will carry it through. Meanwhile Conclave falls further into the rearview.

    And yet, dear reader, while it's good to take all this with some healthy skepticism, there is still a path forward for Berger's movie about a papal conclave where progressives and conservatives come to blows, but then in the end do the right thing and elect a forward-looking new leader (you know: wish fulfillment). After all, underdogs have done it before.

    In the last 25 years, there have been exactly three Best Picture winners that earned that most coveted of Oscar golds without being nominated for Director and/or Cinematography: Argo (2012), Green Book (2018), and CODA (2021). And it's the last one that is of special interest to me.

    Three years ago marked the first awards season after quarantine restrictions were lifted. Nonetheless, it was still an odd and somewhat muted Oscar season due to productions freezing in Hollywood the year before. As a consequence, the Oscar field seemed strangely quiet in January 2022, with the initial frontrunner being a critical favorite that was awarded Best Picture by prestigious critics groups like the New York Film Critics Circle, as well as a Netflix release: Jane Campion's The Power of the Dog. The critical and media narrative was that it had to win. Campion was the clear frontrunner for Best Director, and what could beat it for Picture? (Personally, we far preferred Steven Spielberg's West Side Story, but that musical flopped, so it was a nonstarter with the Academy.)

    Come Oscar night, the Academy again found a way to deny Netflix a Best Picture Oscar, but only in lieu of giving the top award to another streaming film, Apple and Siân Heder's feel-good and easily accessible CODA. It was also a film nominated for only two other awards, Best Supporting Actor and Best Adapted Screenplay. It won both.

    While CODA has arguably gone down as one of the slighter Best Picture winners in recent memory, it also was the antithesis of The Power of the Dog: hopeful, earnest, and ultimately reassuring in a dark moment that things were going to be alright. It also was not a Netflix movie, and while Apple is in the streaming business, only Netflix has seen leadership state more than once that they do not consider theatrical their business model.

    Cut back to 2025.
    We again are in a bit of a slow-down year after film production dried up in 2023 due to the writer and actor strikes. The mood is also once more mournful in much of the country, especially California. The general sense in the industry and media is that 2024 was a weaker year for cinema than the past several, and one of the frontrunners, which is again a critical darling and NYFCC Best Picture winner, is a dark, cynical, and ultimately dispiriting auteur piece: The Brutalist. Furthermore, according to Variety, a not-insignificant number of Academy members are refusing to watch the 3.5-hour epic. Yes, there is another frontrunner which has a more hopeful and optimistic message, the lightning rod musical Emilia Pérez, but that film is still a Netflix release. And this one comes with a lot of online controversy and baggage.

    Admittedly, online backlash didn't thwart Green Book's chances in early 2019. However, its biggest competitor was, guess what, another Netflix film in Alfonso Cuarón's lyrical Roma. Also it seems online sentiment did play a role in La La Land's stumble two years earlier, a film so securely seen as the frontrunner that when it was incorrectly named Best Picture at the 2017 Academy Awards, no one at home or in the audience guessed anything was amiss.

    Lastly, one other major factor to remember is that Best Picture, unlike all other categories at the Oscars, is voted on by way of a preferential ballot. This means that instead of just picking their favorite, each voter ranks the nominees in order, from the film they most want to take home the top prize to the one they want least. And it seems likely that while Emilia Pérez will receive a lot of number one votes, it could end up either number nine or 10 on plenty of other ballots too.

    Meanwhile that same preferential system could mean a movie that ends up as a lot of folks' number two choice (and with a decent amount of number ones) could still seize the day and win Best Picture. This brings us back to Conclave, a political thriller that plays exceedingly well with older audiences who remember when such performance and narrative-heavy entertainments were part and parcel of any Oscar season. In fact, the AARP nominated Conclave more than any other film in its annual Movies for Grownups Awards. Another way to put this is that Boomers really like a movie where the cast is uniformly north of 50 and doing exciting things. And while the demographics of current Academy membership are a bit hard to pick out, since inclusion initiatives in the late 2010s roughly doubled membership numbers, at least in 2012 the median age of Academy members was 62.

    This is a long way to say that Conclave is a real crowdpleaser for a large portion of Oscar voters, and unlike The Brutalist or even Emilia Pérez, it leaves an audience feeling energized instead of wistful or pensive. It could become a consensus pick for a lot of voters who put The Brutalist, Emilia Pérez, or Anora as their number one, but an assortment of the other frontrunners near the bottom.

    In this vein, we'll point to one other relatively recent Oscar year. In January 2016, there was once again no clear frontrunner. Many critics groups were lining up behind the artful and reserved Todd Haynes film Carol, but Haynes has always been an acquired taste that the Academy rarely sips from. Other groups were championing Mad Max: Fury Road, a tour de force in action cinema that remains one of the best chase movies ever conceived.
    It also was a franchise blockbuster and a sequel. Unsurprisingly, it virtually swept the technical categories that year at the Oscars, but was rudely ignored for Picture and Director. Finally, there was a faction lining up behind The Revenant, including the Golden Globes. The Revenant was another high-concept auteur piece from Alejandro González Iñárritu, this time a Western, after he won Best Picture and Director the previous year for Birdman.

    In the end, González Iñárritu triumphed once more in Best Director, but to many's surprise it was neither The Revenant, nor Mad Max, nor Carol that took home the greatest bauble. It was Spotlight, an ensemble piece with excellent performances, a propulsive pacing reflecting our real world, and an unfussy narrative that left grownup audiences satisfied. And it likely got there with a lot of number two and three votes.

    It could happen again.
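    For readers who want the preferential ballot mechanics made concrete, here is a minimal, illustrative Python sketch of that ranked counting process: the weakest nominee is eliminated each round, and its ballots transfer to their next surviving choice until someone clears a majority. The ballots below are invented for illustration, not a prediction, and the Academy's real tabulation (handled by PwC) involves thousands of ballots and additional rules.

    ```python
    from collections import Counter

    def preferential_winner(ballots: list[list[str]]) -> str:
        """Instant-runoff count over ranked ballots."""
        candidates = {name for ballot in ballots for name in ballot}
        while True:
            # Each ballot counts toward its highest-ranked surviving nominee.
            tallies = Counter(
                next(name for name in ballot if name in candidates)
                for ballot in ballots
                if any(name in candidates for name in ballot)
            )
            leader, votes = tallies.most_common(1)[0]
            if votes * 2 > sum(tallies.values()):
                return leader  # outright majority reached
            # No majority: eliminate the nominee with the fewest top votes.
            candidates.remove(min(tallies, key=tallies.get))

    # Invented ballots: Conclave tops few ballots but is nearly everyone's #2.
    ballots = [
        ["Emilia Pérez", "Conclave"],
        ["Emilia Pérez", "Conclave"],
        ["Emilia Pérez", "Conclave"],
        ["The Brutalist", "Conclave"],
        ["The Brutalist", "Conclave"],
        ["Conclave", "The Brutalist"],
        ["Conclave", "Emilia Pérez"],
        ["Anora", "Conclave"],
    ]
    print(preferential_winner(ballots))  # -> Conclave
    ```

    The consensus dynamic described above falls out directly: a nominee with few number one votes but a spot near the top of most ballots inherits transfers every round and can win.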
  • Kerblam! Writer Back for Doctor Who Series 15 Plus Three Newcomers
    www.denofgeek.com
    Making his Doctor Who debut will be British-Nigerian poet and playwright Inua Ellams MBE (Barber Shop Chronicles, Candy Coated Unicorns & Converse All Stars), whose Doctor Who fandom goes back to the age of 10. Not underselling the importance of joining the TV show's writers, Ellams described the experience as "like touching God."

    Also making their Doctor Who TV debut is Manchester-based screenwriter Sharma Angel-Walfall, whose television credits include recent releases from Netflix, Disney+, Sky and the BBC, from Supacell to The Ballad of Renegade Nell to Dreamland (coincidentally starring former Doctor Who companion Freema Agyeman). Angel-Walfall won the very first Channel 4 New Writing Award, and is a massive Russell T Davies fan.

    Finally, novelist and screenwriter Juno Dawson (Her Majesty's Royal Coven, This Book is Gay) has made the move from Doctor Who podcasts and books to the main TV show. A familiar name to Who fans, Dawson wrote 2018's 13th Doctor-era novel The Good Doctor, created the first official Doctor Who scripted podcast in Doctor Who: Redacted, wrote for the Torchwood and New Series Adventures podcasts, and was a co-presenter on The Official Doctor Who Podcast for the show's three 60th anniversary specials.

    Expect many more Who announcements to arrive over the next few months as the BBC and Disney+ start to hype what Davies is describing as "the most wild and exciting season of Doctor Who yet". Well, he'd know.

    Doctor Who series 15 is expected to air on BBC One and Disney+ in spring 2025.
  • Do We Really Need The OWASP NHI Top 10?
    thehackernews.com
    Jan 27, 2025 | The Hacker News | Application Security / API Security

    The Open Web Application Security Project (OWASP) has recently introduced a new Top 10 project: the Non-Human Identity (NHI) Top 10. For years, OWASP has provided security professionals and developers with essential guidance and actionable frameworks through its Top 10 projects, including the widely used API and Web Application security lists.

    Non-human identity security represents an emerging interest in the cybersecurity industry, encompassing the risks and lack of oversight associated with API keys, service accounts, OAuth apps, SSH keys, IAM roles, secrets, and other machine credentials and workload identities.

    Considering that the flagship OWASP Top 10 projects already cover a broad range of security risks developers should focus on, one might ask: do we really need the NHI Top 10? The short answer is yes. Let's see why, and explore the top 10 NHI risks.

    Why we need the NHI Top 10

    While other OWASP projects might touch on related vulnerabilities, such as secrets misconfiguration, NHIs and their associated risks go well beyond that. Security incidents leveraging NHIs don't just revolve around exposed secrets; they extend to excessive permissions, OAuth phishing attacks, IAM roles used for lateral movement, and more. While crucial, the existing OWASP Top 10 lists don't properly address the unique challenges NHIs present.

    Being the critical connectivity enablers between systems, services, data, and AI agents, NHIs are extremely prevalent across development and runtime environments, and developers interact with them at every stage of the development pipeline. With the growing frequency of attacks targeting NHIs, it became imperative to equip developers with a dedicated guide to the risks they face.

    Understanding the OWASP Top 10 ranking criteria

    Before we dive into the actual risks, it's important to understand the ranking behind the Top 10 projects. OWASP Top 10 projects follow a standard set of parameters to determine risk severity:

    - Exploitability: Evaluates how easily an attacker can exploit a given vulnerability if the organization lacks sufficient protection.
    - Impact: Considers the potential damage the risk could inflict on business operations and systems.
    - Prevalence: Assesses how common the security issue is across different environments, disregarding existing protective measures.
    - Detectability: Measures the difficulty of spotting the weakness using standard monitoring and detection tools.

    Breaking down the OWASP NHI Top 10 risks

    Now to the meat. Let's explore the top risks that earned a spot on the NHI Top 10 list and why they matter:

    NHI10:2025 - Human Use of NHI. NHIs are designed to facilitate automated processes, services, and applications without human intervention. However, during the development and maintenance phases, developers or administrators may repurpose NHIs for manual operations that should ideally be conducted using personal human credentials with appropriate privileges. This can cause privilege misuse, and, if an abused key is part of an exploit, it is hard to know who is accountable for it.

    NHI9:2025 - NHI Reuse. NHI reuse occurs when teams repurpose the same service account, for example, across multiple applications. While convenient, this violates the principle of least privilege and can expose multiple services in the case of a compromised NHI, increasing the blast radius.

    NHI8:2025 - Environment Isolation. A lack of strict environment isolation can lead to test NHIs bleeding into production.
    A real-world example is the Midnight Blizzard attack on Microsoft, where an OAuth app used for testing was found to have high privileges in production, exposing sensitive data.

    NHI7:2025 - Long-Lived Secrets. Secrets that remain valid for extended periods pose a significant risk. A notable incident involved Microsoft AI inadvertently exposing an access token in a public GitHub repository, which remained active for over two years and provided access to 38 terabytes of internal data.

    NHI6:2025 - Insecure Cloud Deployment Configurations. CI/CD pipelines inherently require extensive permissions, making them prime targets for attackers. Misconfigurations, such as hardcoded credentials or overly permissive OIDC configurations, can lead to unauthorized access to critical resources, exposing them to breaches.

    NHI5:2025 - Overprivileged NHI. Many NHIs are granted excessive privileges due to poor provisioning practices. According to a recent CSA report, 37% of NHI-related security incidents were caused by overprivileged identities, highlighting the urgent need for proper access controls and least-privilege practices.

    NHI4:2025 - Insecure Authentication Methods. Many platforms like Microsoft 365 and Google Workspace still support insecure authentication methods like implicit OAuth flows and app passwords, which bypass MFA and are susceptible to attacks. Developers are often unaware of the security risks of these outdated mechanisms, which leads to their widespread use and potential exploitation.

    NHI3:2025 - Vulnerable Third-Party NHI. Many development pipelines rely on third-party tools and services to expedite development, enhance capabilities, monitor applications, and more. These tools and services integrate directly with IDEs and code repos using NHIs like API keys, OAuth apps, and service accounts. Breaches involving vendors like CircleCI, Okta, and GitHub have forced customers to scramble to rotate credentials, highlighting the importance of tightly monitoring and mapping these externally owned NHIs.

    NHI2:2025 - Secret Leakage. Secret leakage remains a top concern, often serving as the initial access vector for attackers. Research indicates that 37% of organizations have hardcoded secrets within their applications, making them prime targets.

    NHI1:2025 - Improper Offboarding. Ranked as the top NHI risk, improper offboarding refers to the prevalent oversight of lingering NHIs that were not removed or decommissioned after an employee left, a service was removed, or a third party was terminated. In fact, over 50% of organizations have no formal processes to offboard NHIs. NHIs that are no longer needed but remain active create a wide array of attack opportunities, especially for insider threats.

    A standardized framework for NHI security

    The OWASP NHI Top 10 fills a critical gap by shedding light on the unique security challenges posed by NHIs. Security and development teams alike lack a clear, standardized view of the risks these identities pose, and how to go about including them in security programs. As their usage continues to expand across modern applications, projects like the OWASP NHI Top 10 become more crucial than ever.
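    Because secret leakage (NHI2) so often serves as the initial access vector, it is worth seeing how little code a basic detection pass requires. The sketch below is a deliberately minimal illustration of hardcoded-secret scanning, not a stand-in for dedicated scanners, which add entropy analysis, git-history coverage, and far richer rule sets. The AWS access-key-ID prefix is publicly documented; the generic assignment pattern is our own naive heuristic.

    ```python
    """Minimal hardcoded-secret scanner sketch (illustrating OWASP NHI2)."""
    import re
    import sys
    from pathlib import Path

    # Example rules only: one documented key format, one naive heuristic.
    PATTERNS = {
        "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "generic-secret-assignment": re.compile(
            r"(?i)\b(api[_-]?key|secret|token|password)\b\s*[:=]\s*['\"][^'\"]{8,}['\"]"
        ),
    }

    def scan(root: Path) -> int:
        """Walk a source tree and report lines matching secret patterns."""
        findings = 0
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file; skip it
            for lineno, line in enumerate(text.splitlines(), start=1):
                for rule, pattern in PATTERNS.items():
                    if pattern.search(line):
                        findings += 1
                        print(f"{path}:{lineno}: possible {rule}")
        return findings

    if __name__ == "__main__":
        root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
        sys.exit(1 if scan(root) else 0)
    ```

    Running a pass like this in CI, before code reaches a shared repository, is one inexpensive way to shrink the window in which a leaked credential can become an attacker's entry point.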
  • AI Projects at the Edge: How To Plan for Success
    www.informationweek.com
    Przemysław Krokosz, Edge and Embedded Technology Solutions Expert, Mobica | January 27, 2025 | 5 Min Read

    Artificial Intelligence continues to gain traction as one of the hottest areas in the technology sector. To meet AI's requirements for processing power, we are seeing a race by US vendors to establish data centers worldwide. Google recently announced a $1 billion investment in cloud infrastructure in Thailand, which was followed almost immediately by Oracle's promise of $6.5bn in Malaysia. Added to this are many similar ventures in Europe, all under the flag of AI development.

    It's hardly surprising, then, that people thinking about AI investment typically think of a cloud-based project. Yet we are also seeing significant growth in AI deployments at the edge, and there's good reason for this.

    The Case for the Edge

    Two of the most compelling reasons are the superiority of speed and security that edge computing can offer. Edge's freedom from dependence on connectivity provides low latency and makes it possible to create air gaps through which cyber criminals cannot penetrate. These are both vitally important issues.

    Speed is of the essence in many applications -- in hospitals, industrial sites or transportation, for example. A delay in machine calculations in a critical care unit is literally a matter of life and death. The same applies to an autonomous vehicle detecting an imminent collision. There's no time for the technology to wait for a cellular connection.

    Meanwhile, cybercrime increasingly poses a major threat throughout the world. The 2024 Cloud Security Report from Check Point Software and Cybersecurity Insiders, based on conversations with 800 cloud and cybersecurity professionals, found that 96% of respondents were concerned about their capacity to manage cloud security risks, with 39% describing themselves as very concerned. For sectors such as energy, utilities, and pharmaceuticals, security is a top priority for obvious reasons.

    Another reason for considering edge deployment for an AI implementation is cost. If you have a user base that is likely to grow substantially, operational expenditure may increase significantly in a cloud model. It may do so even more if the AI solution also requires the regular transfer of large amounts of data, such as video imagery. In these cases, a cloud-based approach may not be financially sustainable in the long term.

    Developments at the Edge

    While edge will never be able to compete with the cloud in terms of sheer processing power, a new class of system-on-chip (SoC) processors has emerged that is designed for AI inference. Many of the vendors in this space have also designed chipsets dedicated to specific use cases, allowing further cost optimization.

    Some specific examples of these new products are Intel's platforms to support computer vision edge deployments, Qualcomm's improved chips for mobile and wearable devices, and Ambarella advancing what's possible with video and image processing. Meanwhile, Nvidia is producing versatile solutions for applications in autonomous vehicles, healthcare, industry and more.

    These are just some of the contributory factors in the growth of the global edge AI market. One market research company recently estimated that it would grow to $61.63 billion in 2028, from $24.48 billion in 2024.

    Taking AI to the Edge

    So how do you bring your AI project to the edge? The answer is carefully. Perhaps counter-intuitively, an edge AI project often should begin in the cloud.
    The initial development of edge AI inference usually requires a level of processing power that can only be found in a cloud environment. But once the development and training of the AI model is complete, the fully mature version can be deployed at the edge.

    The next step will be to consider how the data processing requirements can be kept to a minimum. The insatiable demand for computing power from the most capable AI models is widely known, but this applies to all scales of AI -- even smaller models at the edge. Therefore, at this point, a range of optimization techniques will be required to minimize both the processing power and the data inputs the model needs.

    This will involve reviewing the specific use case and the capabilities of the selected SoC, along with all edge device components, such as cameras and sensors, that may be supplying the data. The process is likely to involve a sizeable degree of experimentation and adjustment to find the lowest acceptable level of decision-making accuracy that can be achieved without undue compromises in the quality of the solution.

    The AI model itself also needs to be iteratively optimized to enable inference at the edge. Achieving this almost certainly will involve several transformations, as the model goes through the processes of quantization and simplification.

    Businesses also need to address openness and extensibility factors to ensure that the system will be interoperable with third-party products. This will likely involve the development of a dedicated API to support the integration of internal and external plugins, and the creation of a software development kit to ensure smooth deployments.

    Finally, AI solutions are progressing at an unprecedented rate, with better models being released all the time. So there needs to be a reliable method for quickly updating the ML models at the core of an edge solution. This is where MLOps kicks in, alongside DevOps methodology, to provide the complete development pipeline. Tools and techniques developed for and used in traditional DevOps, such as containerization, can be applied to maintain competitive advantage.

    Given the speed of AI development, most organizations will soon be considering its adoption in one form or another. With edge technology advancing rapidly as well, businesses need to seriously consider the benefits this can provide before they invest.

    About the Author
    Przemysław Krokosz is an edge and embedded technology solutions expert at Mobica. He works closely with some of the world's largest and most prestigious organizations on innovative areas of tech development.
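    As a concrete illustration of the quantization step mentioned in the article, here is a minimal sketch using PyTorch's post-training dynamic quantization. The tiny network is a stand-in for a cloud-trained model, and the specific technique is our choice of example rather than a recommendation from the author; a real edge deployment would benchmark accuracy and latency on the target SoC.

    ```python
    """Sketch: post-training dynamic quantization of a cloud-trained model."""
    import torch
    import torch.nn as nn

    # Stand-in for a model whose development and training happened in the cloud.
    model = nn.Sequential(
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
    )
    model.eval()

    # Dynamic quantization converts Linear weights to int8: a smaller memory
    # footprint and faster CPU inference, at the cost of a small accuracy loss
    # to be weighed against the lowest acceptable decision-making accuracy.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 128)  # dummy sensor/feature input
    with torch.no_grad():
        print(quantized(x).shape)  # torch.Size([1, 10])
    ```

    Static quantization, pruning, and distillation follow the same pattern of iterative transformation the author describes: each trades a little accuracy for a model that fits the edge device's compute and memory budget.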
  • TRALGO: Frontend Developer (m/f/d) with a Focus on React|SolidJS - Freelance Basis
    weworkremotely.com
    Do you love challenges and want to be part of an innovative team? How about joining our great international team and bringing your ideas to life in exciting projects?

    We offer you the chance to work remotely or, after a successful collaboration, to grow with us at our modern headquarters in Dubai. Work with the best, shape the future of financial technology, and take your skills to the next level. Together we create innovations, take on challenges, and develop solutions that make a difference.

    Does that sound like your perfect working environment? Then let's find out whether we're a match!

    Responsibilities
    - Development and optimization of financial technologies: You will further develop and optimize our existing systems and applications in the exchange and trading space.
    - Backend and frontend development: Depending on your specialization, you will be involved in evolving our trading platforms and analysis tools.
    - Process automation: Develop automations for data processing and analysis to enable more efficient workflows.
    - Working with APIs: Integrate external data sources (exchange data, financial market information) and develop APIs for efficient data exchange.
    - Problem analysis and debugging: Fix technical errors and ensure that all systems run flawlessly.

    Requirements
    - Software development experience: You have at least 2-3+ years of professional experience in software development, ideally in fintech, exchanges, or trading.
    - Technical skills: You are proficient in common programming languages and frameworks such as Python, Java, C++, JavaScript, Go, SolidJS, React, or similar technologies.
    - Database knowledge: You have experience working with databases (SQL, NoSQL) and can manage large volumes of data efficiently.
    - Experience with APIs and automation: You have already built or worked with APIs and are familiar with process automation.
    - Knowledge of finance/trading: An interest in and understanding of exchanges, financial markets, and trading processes is a plus.
    - Independent working style: You work independently and deliver projects reliably and on schedule.
    - Language: We are a German-speaking team, so it is important to us that you also speak German.

    Benefits
    - Dynamic work environment: Work in an ambitious team with flat hierarchies.
    - Development opportunities: You will be involved in exciting projects that challenge and expand your skills.
    - Remote work possible: Flexible working hours and the option to work remotely.
    - Access to the financial world: Work directly at the intersection of technology and financial markets.
  • Useful quantum computing is inevitable and increasingly imminent
    www.technologyreview.com
    On January 8, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away, at the same time suggesting those computers will need Nvidia GPUs in order to implement the necessary error correction. However, history shows that brilliant people are not immune to making mistakes. Huang's predictions miss the mark, both on the timeline for useful quantum computing and on the role his company's technology will play in that future.

    I've been closely following developments in quantum computing as an investor, and it's clear to me that it is rapidly converging on utility. Last year, Google's Willow device demonstrated that there is a promising pathway to scaling up to bigger and bigger computers. It showed that errors can be reduced exponentially as the number of quantum bits, or qubits, increases. It also ran a benchmark test in under five minutes that would take one of today's fastest supercomputers 10 septillion years. While too small to be commercially useful with known algorithms, Willow shows that quantum supremacy (executing a task that is effectively impossible for any classical computer to handle in a reasonable amount of time) and fault tolerance (correcting errors faster than they are made) are achievable.

    For example, PsiQuantum, a startup my company is invested in, is set to break ground on two quantum computers that will enter commercial service before the end of this decade. The plan is for each one to be 10 thousand times the size of Willow, big enough to tackle important questions about materials, drugs, and the quantum aspects of nature. These computers will not use GPUs to implement error correction. Rather, they will have custom hardware, operating at speeds that would be impossible with Nvidia hardware.

    At the same time, quantum algorithms are improving far faster than hardware. A recent collaboration between the pharmaceutical giant Boehringer Ingelheim and PsiQuantum demonstrated a more than 200x improvement in algorithms to simulate important drugs and materials. Phasecraft, another company we have invested in, has improved the simulation performance for a wide variety of crystal materials and has published a quantum-enhanced version of a widely used algorithm.

    Advances like these lead me to believe that useful quantum computing is inevitable and increasingly imminent. And that's good news, because the hope is that such machines will be able to perform calculations that no amount of AI or classical computation could ever achieve.

    We should care about the prospect of useful quantum computers because today we don't really know how to do chemistry. We lack knowledge about the mechanisms of action for many of our most important drugs. The catalysts that drive our industries are generally poorly understood, require expensive exotic materials, or both. Despite appearances, we have significant gaps in our agency over the physical world; our achievements belie the fact that we are, in many ways, stumbling around in the dark.

    Nature operates on the principles of quantum mechanics. Our classical computational methods fail to accurately capture the quantum nature of reality, even though much of our high-performance computing resources are dedicated to this pursuit. Despite all the intellectual and financial capital expended, we still don't understand why the painkiller acetaminophen works, how type-II superconductors function, or why a simple crystal of iron and nitrogen can produce a magnet with such incredible field strength.
    We search for compounds in Amazonian tree bark to cure cancer and other maladies, manually rummaging through a pitifully small subset of a design space encompassing 10^60 small molecules. It's more than a little embarrassing.

    We do, however, have some tools to work with. In industry, density functional theory (DFT) is the workhorse of computational chemistry and materials modeling, widely used to investigate the electronic structure of many-body systems such as atoms, molecules, and solids. When DFT is applied to systems where electron-electron correlations are weak, it produces reasonable results. But it fails entirely on a broad class of interesting problems.

    Take, for example, the buzz in the summer of 2023 around the room-temperature superconductor LK-99. Many accomplished chemists turned to DFT to try to characterize the material and determine whether it was, indeed, a superconductor. Results were, to put it politely, mixed, so we abandoned our best computational methods, returning to mortar and pestle to try to make some of the stuff. Sadly, although LK-99 might have many novel characteristics, a room-temperature superconductor it isn't. That's unfortunate, as such a material could revolutionize energy generation, transmission, and storage, not to mention magnetic confinement for fusion reactors, particle accelerators, and more.

    AI will certainly help with our understanding of materials, but it is no panacea. New AI techniques have emerged in the last few years, with some promising results. DeepMind's Graph Networks for Materials Exploration (GNoME), for example, found 380,000 new potentially stable materials. The fundamental issue is that an AI model is only as good as the data it's trained on. Training an LLM on the entire internet corpus, for instance, can yield a model that has a reasonable grasp of most human culture and can process language effectively. But if DFT fails for any non-trivially correlated quantum systems, how useful can a DFT-derived training set really be? We could also turn to synthesis and experimentation to create training data, but the number of physical samples we can realistically produce is minuscule relative to the vast design space, leaving a great deal of potential untapped.

    Only once we have reliable quantum simulations to produce sufficiently accurate training data will we be able to create AI models that answer quantum questions on classical hardware. And that means that we need quantum computers. They afford us the opportunity to shift from a world of discovery to a world of design. Today's iterative process of guessing, synthesizing, and testing materials is comically inadequate. In a few tantalizing cases, we have stumbled on materials, like superconductors, with near-magical properties. How many more might these new tools reveal in the coming years?

    We will eventually have machines with millions of qubits that, when used to simulate crystalline materials, open up a vast new design space. It will be like waking up one day and finding a million new elements with fascinating properties on the periodic table.

    Of course, building a million-qubit quantum computer is not for the faint of heart. Such machines will be the size of supercomputers, and require large amounts of capital, cryoplant, electricity, concrete, and steel. They also require silicon photonics components that perform well beyond anything in industry, error correction hardware that runs fast enough to chase photons, and single-photon detectors with unprecedented sensitivity.
    But after years of research and development, and more than a billion dollars of investment, the challenge is now moving from science and engineering to construction.

    It is impossible to fully predict how quantum computing will affect our world, but a thought exercise might offer a mental model of some of the possibilities. Imagine our world without metal. We could have wooden houses built with stone tools, agriculture, wooden plows, movable type, printing, poetry, and even thoughtfully edited science periodicals. But we would have no inkling of phenomena like electricity or electromagnetism: no motors, generators, radio, MRI machines, silicon, or AI. We wouldn't miss them, as we'd be oblivious to their existence. Today, we are living in a world without quantum materials, oblivious to the unrealized potential and abundance that lie just out of sight.

    With large-scale quantum computers on the horizon and advancements in quantum algorithms, we are poised to shift from discovery to design, entering an era of unprecedented dynamism in chemistry, materials science, and medicine. It will be a new age of mastery over the physical world.

    Peter Barrett is a general partner at Playground Global, which invests in early-stage deep-tech companies including several in quantum computing, quantum algorithms, and quantum sensing: PsiQuantum, Phasecraft, NVision, and Ideon.
  • Russell-Cotes Art Gallery and Museum, Bournemouth
    www.architectsjournal.co.uk
    The team selected for the estimated £359,000 contract will deliver an upgrade of the landmark John Fogarty-designed Grade II*-listed building, which holds 40,000 artworks and is located on the East Cliff overlooking Bournemouth town centre.

    The project, planned to complete in 2028, will refurbish and upgrade the historic complex, which was originally constructed as a private house before becoming a public art gallery but has seen little investment since 1999 despite its exposed coastal location facing the sea.

    According to the brief: "Russell-Cotes Art Gallery and Museum (RCAGM) is a Grade II*-listed building, located on the East Cliff of Bournemouth and housing an internationally important collection of 40,000 items of Victorian fine and decorative art and world cultures, much on open display.

    "This procurement is to obtain the professional services of a suitably qualified conservation accredited specialist (architect, chartered surveyor or chartered architectural technologist) to lead a team of specialist consultants from RIBA Stage 4 (Technical Design) to Stage 6 (Handover).

    "The successful Bidder will be expected to commence the works April 2025. The Successful Bidder will be required to ensure the works are completed by March 2028."

    The Russell-Cotes Art Gallery and Museum was originally constructed as a house and private gallery in 1894 and was gifted to the local council in 1908. The complex is subject to extreme climatic conditions and is in need of repair.

    The latest procurement comes two years after Burrell Foley Fischer won a competition for a landmark new £3 million-to-£3.5 million beach pavilion in nearby Sandbanks.

    Bids for the latest commission will be evaluated 60 per cent on quality and 40 per cent on price. Applicants must hold employer's liability insurance of £5 million, public liability insurance of £5 million and professional indemnity insurance of £5 million.

    Competition details
    Project title: Design Lead for Russell-Cotes Art Gallery and Museum Conservation
    Client:
    Contract value: £359,000
    First round deadline: 2pm, 28 February 2025
    Restrictions: TBC
    More information: https://www.find-tender.service.gov.uk/Notice/002412-2025
  • Foster + Partners' Fulham Gas Works residential towers plans approved
    www.architectsjournal.co.uk
    The scheme for St William, a Berkeley company, is the fourth phase of the AJ100 practice's Kings Road Park masterplan. Consent was recently handed down by the London Borough of Hammersmith & Fulham for development through delegated powers.

    The Fosters scheme will see the eastern part of the site, a former gasworks located just south of Chelsea Football Club's home stadium at Stamford Bridge, redeveloped.

    The plans feature a seven-storey podium building, a 28-storey tower and a 38-storey tower, together providing 357 private homes and amenities. In addition, nearly two acres of new parkland and public open space will be created.

    The consented design includes one less tower than the outline proposals first drawn up by Fosters for the Kings Road Park masterplan with single staircases, which were approved in February 2019 before new fire safety regulations were introduced. The previous vision also featured smaller floorplates.

    Fosters said the updated design reduces the development's embodied carbon compared with the outline planning application, while meeting new fire safety requirements. It also increases daylight entering the new park by 59 per cent.

    Giles Robinson, senior partner at Foster + Partners, said: "The scheme will provide the highest-quality homes that overlook one of London's most spectacular new public parks.

    "Our design complements the historic urban surroundings and enhances connections with nature by significantly increasing the amount of green space at the base of the towers and extending the experience of the park onto the podium's rooftop."

    The wider Kings Road Park masterplan proposes 1,800 new homes, both private and affordable, on the site of the former gasworks off Imperial Road. The scheme also includes a new park and a restored Grade II*-listed gasholder.

    An earlier vision for the masterplan and fourth phase was drawn up by Apt, also for St William, and was approved in 2018 before Fosters took over the job.

    Phase 1, including 345 homes, was designed by EPR Architects and is nearing completion.

    The timeline for completion of the works is unknown.