WWW.WSJ.COM
DeepSeek Won't Sink U.S. AI Titans
Panic fueling the selloff of Nvidia, Broadcom and other tech giants is overblown.
-
ARSTECHNICA.COM
Now I feel old: Mazda celebrates 35 years of the MX-5 with anniversary model
The MX-5 is the perfect antidote to all those big SUVs.
Jonathan M. Gitlin | Jan 27, 2025 12:19 pm
Credit: Mazda

DAYTONA, Florida: This might make you feel old, but the Mazda MX-5 Miata just turned 35. Still in its fourth generation, better known to Miata nerds as the "ND," this small, affordable, lightweight sports car is the perfect antidote to, well, just about everything else on the roads. And to celebrate this latest milestone, Mazda has created a new special anniversary edition, which it unveiled at this past weekend's Rolex 24 at Daytona.

When the Miata debuted in 1989, it was something of a game-changer. Inspired by classic European roadsters like those built by MG and Alfa Romeo, it was small, lithe, and, most importantly, reliable. It didn't hurt that it looked nice and was great to drive.

It's also been something of a hit among amateur racers: Mazda is proud that each weekend, more Miatas are on track than any other make of car. That goes some way to explaining why Mazda chose this year's Rolex 24 at Daytona to reveal the new 35th Anniversary Edition; the MX-5 Cup series is probably IMSA's most exciting support series.

The 35th Anniversary Edition is the latest in a long line of special edition Miatas, including anniversary cars for the 10th, 20th, 25th, and 30th editions. The focus here was on "classic elegance," with Artisan Red paint that's almost burgundy, plus a tan Nappa leather interior that will remind some of the tan leather interiors that Mazda used on some NAs.

[Image captions: "IMSA posts the MX-5 Cup races on YouTube, and they're highly entertaining. This year, one race was marked by a huge pileup right at the start." / "The 35th Anniversary Edition even paints the air vent surrounds the same Artisan Red as the bodywork." Credit: Mazda]

The 35th Anniversary Edition is similar to the Grand Touring trim, which means features like heated seats, and Mazda says it has added a limited-slip differential, additional bracing, and some newly tuned Bilstein dampers. There's also a beige convertible roof and some shiny 17-inch alloy wheels.

It's also a bit more expensive than other Miatas, with an MSRP of $36,250. That's $1,620 more expensive than the next-most-expensive six-speed Miata (the Grand Touring), but it does come with the aforementioned extra equipment. Getting hold of one might be a bit tricky, though: Mazda will only import 300 into the US.

Jonathan M. Gitlin, Automotive Editor. Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica's automotive coverage. He lives in Washington, DC.
-
ARSTECHNICA.COM
War of the weights: DeepSeek panic triggers tech stock sell-off as Chinese AI tops App Store
A new Chinese AI app is sparking existential panic in American AI companies and investors.
Benj Edwards | Jan 27, 2025 11:29 am
Credit: Luis Diaz Devesa via Getty Images

On Monday morning, Nvidia stock dove 11 percent amid worries over the rise of Chinese AI company DeepSeek, whose R1 reasoning model stunned industry observers last week by challenging American AI supremacy with a low-cost, freely available AI model, and whose AI assistant app jumped to the top of the iPhone App Store's "Free Apps" category over the weekend, overtaking ChatGPT.

What's the big deal about DeepSeek?
The drama started around January 20, when Chinese AI startup DeepSeek announced R1, a new simulated reasoning (SR) model that it claimed could match OpenAI's o1 in reasoning benchmarks. Like o1, R1 is trained to work through a simulated chain-of-thought process before providing an answer, which can potentially improve the accuracy or usefulness of the model's outputs for some types of questions posed by the user.

That first part wasn't too surprising, since other AI companies like Google are hot on the heels of OpenAI with their own simulated reasoning models. In addition, OpenAI itself has announced an upcoming SR model (dubbed "o3") that can surpass o1 in performance.

There are three elements of DeepSeek R1 that really shocked experts. First, the Chinese startup appears to have trained the model for only $6 million as a so-called "side project," while using less powerful Nvidia H800 AI-acceleration chips due to US export restrictions on cutting-edge GPUs. Secondly, it appeared just four months after OpenAI announced o1 in September 2024. Finally, and perhaps most importantly, DeepSeek released the model weights for free with an open MIT license, meaning anyone can download the model, run it, and fine-tune (modify) it (see the minimal loading sketch below).

It suddenly seemed to many observers on social media that American tech companies like OpenAI and Google, which have so far thrived on proprietary, closed models, have "no moat," as tech insiders often say, meaning that those companies' technological lead, access to cutting-edge hardware, or impressive bankrolls do not necessarily protect them from upstart market challengers.

On Friday, venture capitalist Marc Andreessen wrote on X that DeepSeek R1 is "one of the most amazing and impressive breakthroughs I've ever seen" and a "profound gift to the world." The endorsement from the Andreessen Horowitz cofounder added fuel to the growing buzz around DeepSeek.

On top of that, over the weekend, DeepSeek's app, which allows users to experiment with both the R1 model and the company's V3 conventional large language model (LLM) for free, shot to the top of the US iPhone App Store.
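Because the R1 weights are openly licensed, the smaller distilled checkpoints DeepSeek published alongside the main model can be run locally with ordinary open-source tooling. Here is a minimal sketch, assuming the Hugging Face transformers library and the deepseek-ai repository name as listed on Hugging Face; the model ID, prompt, and generation settings are illustrative assumptions, not details from the article.

```python
# Minimal sketch (not from the article): load a smaller distilled DeepSeek-R1
# checkpoint with Hugging Face transformers and ask it a question.
# The model ID, prompt, and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed Hugging Face repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# R1-style models emit a chain of thought before the final answer,
# so leave generation room for the "reasoning" tokens.
messages = [{"role": "user", "content": "How many prime numbers are there below 30?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```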
Multiple AI-related Reddit threads have suddenly been plastered with DeepSeek-related posts, leading to so-far unfounded accusations that someone in China is astroturfing (pretending to be ordinary users but actually posting with an agenda to support something) to artificially drum up support for the Chinese AI company.

Over the past weekend, social media has been overtaken with a sort of "sky is falling" mentality about AI, coupled with geopolitical angst about US economic rival China catching up with America, which perhaps inspired a measure of panic in big tech investors and led to the Nvidia stock sell-off, despite the fact that DeepSeek used Nvidia chips for training.

As tempting as it is to frame this as a geopolitical tech battle, the "US versus China" framing has been overblown, according to some experts. On LinkedIn, Meta Chief AI Scientist Yann LeCun, who frequently champions open-weights AI models and open source AI research, wrote, "To people who see the performance of DeepSeek and think: 'China is surpassing the US in AI.' You are reading this wrong. The correct reading is: 'Open source models are surpassing proprietary ones.'"

But is DeepSeek R1 any good?
From the start, DeepSeek has claimed that R1 can match OpenAI's o1 model in AI benchmarks, but benchmarks have historically been easy to game and do not necessarily tell you much about how the models might be used in everyday scenarios.

Over the past week, we have experimented with both DeepSeek-V3 (which is roughly the counterpart to OpenAI's GPT-4o) and DeepSeek-R1, and from informal testing, they both seem to be roughly equivalent to OpenAI's ChatGPT models, although that can vary dramatically based on how they are used and prompted. DeepSeek's AI assistant, which you can try at chat.deepseek.com, can even search the web like ChatGPT. We will likely evaluate R1 more formally in a future article.

Ultimately, a cheaply trained open-weights AI model that can match America's best commercial models is genuinely a threat to closed-source AI companies, but it should not be a surprise to anyone who has been watching the rapid rate of progress in AI. The history of computing is replete with examples of information technology getting cheaper and smaller, becoming a commodity, and eventually being absorbed as a component into larger products.

Many software components of modern operating systems (including built-in apps, features, codecs, and utilities) were once separate products that retailed for thousands of dollars when they were first invented. Microprocessors supplanted massive, expensive computer systems and eventually became embedded into everything. We suspect that AI models and software that processes data with simulated reasoning (even hypothetical human-level AI or beyond, if it is ever achieved) will be no different. Tech companies come and go, the next new thing is created, and the cycle repeats itself.

Benj Edwards, Senior AI Reporter. Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
-
WWW.NEWSCIENTIST.COM
European cities face millions more deaths from extreme temperatures
[Image: Tourists try to cool off in Rome, where a large increase in heat deaths is expected by 2099. Massimo Valicchia/NurPhoto via Getty Images]

There will be an extra 2.3 million temperature-related deaths in Europe's main cities by 2099 without more action to limit warming and adapt to it, researchers predict. However, in cities in colder northern countries such as the UK, there will be fewer temperature-related deaths over this period, because the decline in deaths from cold will be greater than the increase in deaths from heat.

"We estimate a slight net decrease, but it's very small compared to the big increase we could see in the Mediterranean region," says Pierre Masselot at the London School of Hygiene & Tropical Medicine.

Masselot's team started by looking at epidemiological studies on how deaths increase during periods of extreme heat or extreme cold. His team then used these statistical links to estimate how the number of excess deaths would change over the next century in various warming scenarios.

The study looks at 850 cities, home to 40 per cent of Europe's population, but not any rural areas. This is because the statistical links are stronger where lots of people live in a small area and are exposed to roughly the same conditions.

If cities don't adapt, the net effect of climate change increases exponentially with greater warming. In a scenario similar to our current course, the number of excess deaths related to temperature would increase by 50 per cent, from 91 per 100,000 people per year in recent years to 136 per 100,000 people per year by 2099 (a quick check of that arithmetic follows at the end of this article).

Adaptive measures such as the wider use of air conditioning and planting more trees in inner cities would bring these numbers down, says Masselot, but "to significantly reduce a population's vulnerability to heat requires substantial adaptive measures. This is much more than what we have already observed in many countries across the world."

The team's estimates are based on the average daily temperatures in warming scenarios, and they don't include the possibility of much more extreme heatwaves. "We have found that usually this is good enough to be able to relate deaths to temperature," says Masselot.

This is the most comprehensive study of its kind so far, he says. It includes more countries and suggests for the first time that even France and Germany will have more temperature-related deaths as the continent warms.

Rising temperatures will have a wide range of effects on people, from their health to their productivity, he says. "Mortality is just one part of the story."

Journal reference: Nature Medicine, DOI: 10.1038/s41591-024-03452-2
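A quick check of the "50 per cent" figure, using only the numbers quoted in the article:

```latex
% Relative increase in temperature-related excess deaths, per the article's figures
\[
  \frac{136 - 91}{91} \approx 0.49
\]
% i.e. roughly a 50 per cent rise, from 91 to 136 excess deaths per 100,000 people
% per year by 2099, in the high-warming scenario the article describes.
```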
-
WWW.NEWSCIENTIST.COM
The psychologist exposing the mental gymnastics that conceal racism
Society | Despite widespread studies revealing the prevalence of racism, its impact is often overlooked. But there are ways to tackle hidden biases and systemic discrimination, says Keon West. 27 January 2025
[Image credit: Becki Gill]

Keon West could reel off anecdotes about the everyday racism he experiences, but he won't. Personal accounts rarely convince anyone, he says, and, all too often, they are dismissed or put down to some other, less offensive, cause. Instead of the feelings that racist behaviour and accusations of racism provoke, he prefers to focus on facts.

A social psychologist at Goldsmiths, University of London, West has consolidated hundreds of rigorous empirical studies on racism conducted over decades in his new book, The Science of Racism. By exploring how experiments can detect racism and measure its impact across societies, he builds a scientifically accurate picture of what contemporary racism is and the complexities that surround it.

While it is clear that society's attempts to combat racism remain inadequate, there is plenty that can be done about it. The same studies that prove the existence of racism can also help us unpack the psychological gymnastics that nearly everyone performs to conceal their racist behaviours from themself. The idea is that, by becoming aware of these personal biases, many racist behaviours can gradually be dissolved.

In this interview, West sheds light on ideas like reverse racism and systemic racism and lays out the science-backed methods of spotting racism in its various guises. Doing so, he hopes, will steer public discourse away from debating whether racism exists to confronting it head on.

Amarachi Orie: What is racism?
Keon West: There are two definitions that I think are useful. There's one that's useful for running the scientific experiments: racism is any
-
WWW.BUSINESSINSIDER.COM
DeepSeek just dropped an updated AI model called Janus-Pro. It says the image generator is better than OpenAI's DALL-E.
DeepSeek releases updated AI model Janus-Pro, outperforming rivals in benchmarks.
The latest versions of Janus, a text-to-image generator, came out after DeepSeek's R1 last week.
DeepSeek said R1 showed advanced reasoning and was trained at a fraction of the cost of US rivals.

DeepSeek just released an updated open-source multimodal AI model. The Chinese startup on Monday shared a research paper and released updated versions of the model, called Janus-Pro-1B and Janus-Pro-7B.

According to its paper, DeepSeek says Janus-Pro outperforms OpenAI's DALL-E 3 and Stable Diffusion 3 Medium text-to-image generators across multiple benchmarks.

The latest release comes after DeepSeek unveiled its model R1 last week, which showed new "reasoning" capabilities. R1's release sent shockwaves across the tech industry and spooked investors. That's because DeepSeek demonstrated a breakthrough AI model that could outperform US rivals, but at what it says is just a fraction of the cost.

DeepSeek is pricing access to its models way under what OpenAI charges, potentially undermining AI business models and assumptions across the sector.

Contact the reporter Jyoti Mann via email or via Signal at jyotimann.11. Reach out via a nonwork device.
-
WWW.BUSINESSINSIDER.COM
DeepSeek's cheaper models and weaker chips call into question trillions in AI infrastructure spending
China's DeepSeek model challenges US AI firms with cost-effective, efficient performance.
DeepSeek's model is 20 to 40 times cheaper than OpenAI's, using modest hardware.
DeepSeek's efficiency raises questions about US investments in AI infrastructure.

The bombshell that is China's DeepSeek model has set the AI ecosystem alight. The models are high-performing, relatively cheap, and compute-efficient, which has led many to posit that they pose an existential threat to American companies like OpenAI and Meta, and to the trillions of dollars going into building, improving, and scaling US AI infrastructure.

The price of DeepSeek's open-source model is competitive: 20 to 40 times cheaper than OpenAI's. But the potentially more nerve-racking element in the DeepSeek equation for US-built models is the relatively modest hardware stack used to build them.

The DeepSeek-V3 model, which is most comparable to OpenAI's ChatGPT, was trained on a cluster of 2,048 Nvidia H800 GPUs, according to the technical report published by the company. H800s are the first version of the company's defeatured chip for the Chinese market. After the regulations were amended, the company made another defeatured chip, the H20, to comply with the changes.

Though this may not always be the case, the chip is the most substantial cost in the large language model training equation. Being forced to use less-powerful, cheaper chips creates a constraint that the DeepSeek team has ostensibly overcome. "Innovation under constraints takes genius," Sri Ambati, CEO of open-source AI platform H2O.ai, told Business Insider. Even on subpar hardware, training DeepSeek-V3 took less than two months, according to the report.

The efficiency advantage
DeepSeek-V3's smaller size comes in part from a different architecture to ChatGPT, called a "mixture of experts." The model has pockets of expertise built in, which go into action when called upon and sit dormant when irrelevant to the query. This type of model is growing in popularity, and DeepSeek's advantage is that it built an extremely efficient version of an inherently efficient architecture (a minimal sketch of the routing idea appears at the end of this article).

"Someone made this analogy: It's almost as if someone released a $20 iPhone," Foundry CEO Jared Quincy Davis told BI. The Chinese model used a fraction of the time, a fraction of the number of chips, and a less-capable, less expensive chip cluster. Essentially, it's a drastically cheaper, competitively capable model that the firm is virtually giving away for free.

The model that is even more concerning from a competitive perspective, according to Bernstein, is DeepSeek-R1, which is a reasoning model and more comparable to OpenAI's o1 or o3. This model uses reasoning techniques to interrogate its own responses and thinking. The result is competitive with OpenAI's latest reasoning models.

R1 was built on top of V3, and the research paper released alongside the more advanced model doesn't include information about the hardware stack behind it. But DeepSeek used strategies like generating its own training data to train R1, which requires more compute than using data scraped from the internet or generated by humans. This technique is often referred to as "distillation" and is becoming a standard practice, Ambati said.

Distillation brings with it another layer of controversy, though. A company using its own models to distill a smarter, smaller model is one thing.
But the legality of using other companies' models to distill new ones depends on licensing. Still, DeepSeek's techniques are more iterative and likely to be taken up by the AI industry immediately.

For years, model developers and startups have focused on smaller models, since their size makes them cheaper to build and operate. The thinking was that small models would serve specific tasks. But what DeepSeek, and potentially OpenAI's o3 mini, demonstrate is that small models can also be generalists.

It's not game over
A coalition of players including Oracle and OpenAI, with cooperation from the White House, announced Stargate, a $500 billion data center project in Texas, the latest in a long, quick procession of a large-scale conversion to accelerated computing. The shock from DeepSeek has called that investment into question, and the largest beneficiary, Nvidia, is on a roller coaster as a result. The company's stock plummeted more than 13% Monday.

But Bernstein said the response is out of step with the reality. "DeepSeek DID NOT 'build OpenAI for $5M'," Bernstein analysts wrote in a Monday investor note. The panic, especially on X, is blown out of proportion, the analysts wrote.

DeepSeek's own research paper on V3 explains: "the aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data." So the $5 million figure is only part of the equation.

"The models look fantastic but we don't think they are miracles," Bernstein continued. Last week China also announced a roughly $140 billion investment in data centers, in a sign that infrastructure is still needed despite DeepSeek's achievements.

The competition for model supremacy is fierce, and OpenAI's moat may indeed be in question. But demand for chips shows no signs of slowing, according to Bernstein. Tech leaders are circling back to a centuries-old economic adage to explain the moment.

Jevons paradox is the idea that innovation begets demand. As technology gets cheaper or more efficient, demand increases much faster than prices drop. That's what providers of computing power, like Davis, have been espousing for years. This week, Bernstein and Microsoft CEO Satya Nadella picked up the mantle too.

"Jevons paradox strikes again!" Nadella posted on X Monday morning. "As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of," he continued.
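To make the "mixture of experts" idea mentioned above concrete, here is a minimal, self-contained routing sketch in PyTorch. The layer sizes, expert count, and top-2 routing are illustrative assumptions, not DeepSeek-V3's actual configuration; the point is simply that a small router picks a few experts per token and the rest stay idle, which is where the compute savings come from.

```python
# Minimal sketch of mixture-of-experts routing (illustrative assumptions only).
# A small router scores every expert for each token; only the top-k experts run.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # per-token expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):  # x: (num_tokens, d_model)
        weights, chosen = self.router(x).topk(self.k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)                   # mixing weights for chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in chosen[:, slot].unique():                 # run each selected expert once
                mask = chosen[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[int(e)](x[mask])
        return out

layer = TinyMoELayer()
tokens = torch.randn(10, 64)   # 10 tokens, 64-dim embeddings
print(layer(tokens).shape)     # torch.Size([10, 64])
```

In a full model, many such layers are stacked, so most of the parameters sit idle for any given token; that is how a very large model can be comparatively cheap to train and run per query.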
-
WWW.DAILYSTAR.CO.UK
Here's why GTA 6 fans think Trailer 2 is coming this week, and it's all because of 'numbers'
GTA 6 fans are convinced this week is finally the week we'll see the game's long-awaited second trailer, with Rockstar Games expected to stick to a surprising reveal pattern.
Tech | 16:00, 27 Jan 2025 | Updated 16:20, 27 Jan 2025
[Image caption: GTA 6 could hit its 2025 date, or it could be delayed]

The GTA 6 community is going wild for a second trailer this month, and as we've covered already, they think it'll drop this week, on Thursday, January 30, to be precise. Despite being the game many think could 'save' the video game industry, Rockstar Games hasn't said anything in well over a year, but with an earnings call on the horizon for parent company Take-Two Interactive, there's every chance the latest 'number theory' is correct.

Here's why the GTA 6 community thinks the second trailer is coming on Thursday: As shared by GTA 6+ on X (formerly Twitter), the reveal of a new GTA Online update alongside numbers on a shipping container is worth a closer look. All of the numbers added together make 30, and the number 2 is in a larger font, meaning some fans are expecting the second trailer on January 30.

We admit, it's not entirely believable right off the bat, but there's more. As the account points out, both GTA 5 and Red Dead Redemption 2's release dates were on Thursdays and, as luck would have it, January 30 is a Thursday.

We've only got a few days to go to find out if Rockstar really is planning something for January 30, or whether it'll be just another GTA Online update on the day instead, but it's far from the most surprising theory we've had so far. In recent months fan speculation has ranged from plotting the phases of the moon in GTA Online, to a PlayStation event that didn't materialise, analysing social media posts and a mystery YouTube playlist.

Thankfully, there's a very cheeky parody game you can play sooner called Grand Taking Ages, which might be for the best if Donald Trump does end up delaying, cancelling, or censoring GTA 6.
-
METRO.CO.UK
On Mass Effect 2's anniversary, what do we know about the next game?
Adam Starkey | Published January 27, 2025 6:06pm | Updated January 27, 2025 6:07pm
[Image caption: Liara is the only character we've seen so far (BioWare)]

As Mass Effect 2 celebrates its 15th anniversary, we look into why the new game is taking so long and whether it will ever be out.

BioWare's history is built on a wealth of critically acclaimed role-playing games, including Baldur's Gate 1 and 2 and Star Wars: Knights Of The Old Republic, but for many its magnum opus is the original Mass Effect trilogy. The sci-fi series debuted in 2007 (it was an Xbox 360 exclusive at first) but it was its sequel which elevated the series to super stardom. The third entry isn't quite so revered, thanks to its controversial ending, but the subsequent quality drop with fourth entry Mass Effect: Andromeda helped make Mass Effect 3's faults look marginal by comparison.

As the series' high point, Mass Effect 2 celebrates its 15th anniversary this week (it was first released in the US on January 26, but it didn't arrive in Europe until January 29), and with a fifth entry in the works, we've assembled every nugget of information out there on what will hopefully be the Normandy crew's big comeback.

When was Mass Effect 4 announced?
After years of rumours, BioWare announced a new Mass Effect game on November 7 (aka N7 Day) in 2020, with a piece of concept art. At the time, the studio's then-vice president Casey Hudson said the team was in the early stages of the project.

"A veteran team has been hard at work envisioning the next chapter of the Mass Effect universe," Hudson said. "We are in early stages on the project and can't say any more just yet, but we're looking forward to sharing our vision for where we'll be going next."

A month later, the team debuted a short trailer at The Game Awards which shows original trilogy character Liara T'Soni picking up some N7 armour. The tagline at the end of the trailer promises Mass Effect will continue.

What have we heard about Mass Effect 4 since?
In April 2022, BioWare shared a blog post where it said the next Mass Effect is now early in development. The studio implied it would still be a few years away yet, telling players that it's going to be a while before we can talk about it in more detail.

The studio also released a creepy audio teaser later in 2022, where Liara is heard having a conversation with the Geth. In the following year, BioWare clarified that it was still in pre-production with a core team of veteran storytellers, with no indication of when we'll see anything substantial, as it moved its focus towards Dragon Age: The Veilguard.

For N7 Day in 2023, the studio shared more concept art and a brief 35-second teaser showing a female character walking down a hallway in an N7 jacket.
Again, it was all very vague on actual details, with the assumption that the game is still at least a few years away.

Mass Effect franchise director Mike Gamble has commented a few times on the upcoming sequel too, stating it will still have a photorealistic look compared to Dragon Age: The Veilguard, and will maintain the mature tone of the original trilogy, a reference to Veilguard's more cartoonish approach, which was not popular with fans.

In the years since Mass Effect 4 (most fans would prefer to forget Andromeda) was announced, several senior staff members have left BioWare, including boss Casey Hudson, Dragon Age executive producer Mark Darrah, and Mass Effect lead writer Mac Walters. In January 2025, Dragon Age: The Veilguard director Corinne Busche also departed the studio after being presented with "an opportunity I couldn't turn down."

Recently, Darrah claimed in a video on his YouTube channel that the next Mass Effect isn't ready to suddenly have a team of 250/300 people working on it following the launch of Dragon Age: The Veilguard. He added: "In the past when BioWare was toying with being on just one project, like on Anthem or The Veilguard, that project was up and running at full speed so it was able to suck in every available resource, it had enough existing infrastructure that it was able to absorb everything.

"That's not exactly what's happening [with Mass Effect 4]. You see this when you go on to people's social media profiles. People who worked on The Veilguard, some of them are going onto Mass Effect, but some of them are moving into other parts of the EA organisation because Mass Effect isn't ready for them."

Are there any rumours about Mass Effect 4's release date?
While nothing has been officially announced, insider Jeff Grubb claimed in 2023 that the game is just nowhere near coming out, suggesting it might not be released until 2029.

Considering we've had no confirmation that it has moved beyond pre-production, there's a chance it might be even further away. The next hope for an update will likely be Summer Game Fest later this year, or the annual N7 Day on November 7, 2025. So even news about the next game is likely to be a long way off.

[Image caption: Who is this mysterious figure? (BioWare)]