• WWW.WIRED.COM
    Hulu Promo Codes and Discounts: 90% Off for Select Users
    Select customers can get a Hulu plan for $0.99 per month for 12 months, just by signing up. Get more details on this and other great deals below.
  • WWW.WIRED.COM
    Wayfair Coupons: Up to 80% Off November 2024 | WIRED
    Get 10% off with Wayfair promo code, up to 80% off furniture, and more top coupons for November.
  • WWW.NYTIMES.COM
    Review: We Are Your Robots, Still Tuning Up
    In Ethan Lipton's musings on A.I., Mozart has a place alongside humpback whales.
  • WWW.NYTIMES.COM
    China's Huawei Takes Aim at Apple With Latest Smartphone
    Last year, a chip breakthrough put Huawei on top of the Chinese smartphone market. Now it is rolling out its newest phone, the Mate 70 series.
  • WWW.MACWORLD.COM
    Macworld Podcast: Black Friday holiday shopping tips
    'Tis the season to go shopping! On this episode of the Macworld Podcast, we've got some tips and tricks to get the most out of the holiday shopping season! Find out how to make the most out of your purchases! This is episode 911 with Karen Haslam, David Price, and Roman Loyola.
    Listen to episode 911 on Apple Podcasts
    Listen to episode 911 on Spotify
    Find a deal
    Check out these roundups for the best Apple deals:
    Apple Black Friday 2024 sale
    Best Black Friday 2024 Apple deals
    Best Black Friday 2024 Mac deals
    Best Black Friday 2024 MacBook deals
    Best Black Friday 2024 AirPods deals
    Best Black Friday 2024 Apple Watch deals
    Best Black Friday 2024 iPad deals
    Best Black Friday 2024 iPhone deals
    Best Black Friday 2024 Mac monitor deals
    Best Black Friday 2024 SSD and external hard drive deals
    Best Black Friday 2024 Apple accessory deals
    Best Black Friday deals on Mac Thunderbolt and USB-C docks and hubs
    Subscribe to the Macworld Podcast
    You can subscribe to the Macworld Podcast (or leave us a review!) right here in the Podcasts app. The Macworld Podcast is also available on Spotify and on the Macworld Podcast YouTube channel. Or you can point your favorite podcast-savvy RSS reader at: https://feeds.megaphone.fm/macworld
    To find previous episodes, visit Macworld's podcast page or our home on Megaphone.
  • WWW.MACWORLD.COM
    Black Friday came early! Give your gaming PC an internal makeover with Windows 11 Pro for $20
    While Apple silicon is incredibly efficient, it hasn't reached the point where its speed surpasses that of a high-end gaming PC. That, and your Mac's lack of game compatibility, is likely why you use a PC to game. However, if you haven't upgraded your device's operating system in a while, you might be doing your gaming a disservice.
    Just update to Windows 11 Pro, which comes with DirectX 12 Ultimate for enhanced gaming and plenty of other innovations. Ahead of Black Friday, lifetime access is available for $19.97 (reg. $199).
    Whether you're a fan of Elden Ring: Shadow of the Erdtree or Dragon's Dogma 2, you'll love Windows 11 Pro's gaming enhancements. DirectX 12 Ultimate is one of many standout features and is designed to boost gaming graphics and maximize your PC's hardware so you enjoy a more immersive gaming experience.
    Once you outfit your PC with Windows 11 Pro, you might want to use it for more than gaming, especially since it comes with a new AI-powered assistant, Copilot. It's powered by a custom version of GPT-4 to assist with writing, image generation, and studying, and it can even change your system's settings for you.
    Security features like TPM 2.0, Smart App Control, biometric login, and other antivirus defenses are also included in this OS to keep your personal data and device protected from bad actors.
    Give your PC a gamer-friendly makeover with this early Black Friday deal: a $19.97 lifetime license for Windows 11 Pro, available until December 1 at 11:59 p.m. Pacific!
    Microsoft Windows 11 Pro: Only $19.97 at Macworld
    StackSocial prices subject to change.
  • WWW.COMPUTERWORLD.COM
    For Microsoft, will Trump's antitrust and environmental views help or harm?
    I recently wrote about how President-elect Donald J. Trump's actions on AI might affect Microsoft. This week, I'm focused on what his antitrust regulation and environmental plans, and the biggest wildcard of all, his personal vendettas, could do to the company.
    What Microsoft can expect from antitrust lawsuits
    Trump believes that the less regulation on big business, the better. So you would expect him to put an end to antitrust suits against the tech industry. But that's not necessarily the case.
    There's no doubt that Lina Khan, the head of the US Federal Trade Commission (FTC) who has aggressively pursued antitrust prosecutions against tech, will be let go after Trump's election. And many of Trump's advisers, notably venture capitalist Marc Andreessen, would like to see tech antitrust prosecutions stop.
    However, some advisers close to Trump, including Vice President-elect JD Vance, want the administration to take on Big Tech, mainly because they want to stop Meta and other social media companies from policing misinformation, white supremacism, public-health deceptions, and election lies.
    Microsoft has largely been spared Khan's prosecutions, even as the Biden administration has targeted Google, Apple, Meta, and Amazon. The one recent federal antitrust action against Microsoft by the FTC, for buying the gaming giant Activision, didn't go well for the feds. A judge let the purchase go through, although the FTC has since appealed the case.
    That might make you think that Microsoft is in the clear under Trump. But The Washington Post reports the FTC will be investigating Microsoft's cloud business for anticompetitive practices. In addition, the FTC appeal of the Activision case still stands, so that case could be revived.
    Trump could demand that whomever he appoints to head the FTC drop those actions. Odds are, he won't, thanks to his main tech adviser, entrepreneur Elon Musk. Musk's AI startup, xAI, competes directly with Microsoft, and is now valued at $50 billion after investments this spring from Andreessen and others. Musk also recently amended an antitrust suit he filed against OpenAI, adding Microsoft as a defendant.
    Don't be surprised if the FTC under Trump not only follows through on Khan's investigations of Microsoft, but also files an AI suit against the company, thanks to Musk's influence.
    Trump, Microsoft, and climate change
    Trump believes climate change is a hoax. He's vowed to tear up environmental regulations and attack green energy. His campaign slogan, "Drill, Baby, Drill," and his close friendship with the oil industry make clear that he'll do everything he can to increase reliance on fossil fuels and kill clean sources of electricity.
    He was also a booster of nuclear power during his first administration, though he wasn't quite as enthusiastic about it on the campaign trail. Even so, the stock market price of nuclear-power-related companies jumped the day after his election, and most people expect him to be a nuclear backer.
    What does this have to do with Microsoft? Plenty.
    Microsoft has vowed to make itself carbon-negative by 2030, and Trump's attack on green energy will make it more difficult for the company to find clean energy sources. Exacerbating Microsoft's climate-change challenges is the fact that the data centers that power AI require a tremendous amount of electricity. As I've noted before, Microsoft might be abandoning its promises to fight climate change because of that. And the company could also pour billions into reviving nuclear energy with a proposed deal to reopen Three Mile Island, the site of the worst nuclear power disaster in US history.
    Given Trump's views about climate change and his support for AI, he'll most likely do everything he can to give Microsoft and other AI companies all the electricity they want, no matter the effect on the environment. And he'll also likely let them go full speed ahead with nuclear power. In fact, Microsoft President Brad Smith recently said he expects Trump to cut environmental regulations to provide Microsoft with all the electricity it wants for its AI data centers.
    Gregory Allen, director of the Wadhwani AI Center at the Center for Strategic and International Studies, who worked on AI issues at the Department of Defense during the Trump and Biden presidencies, agrees. On a call hosted by The Information, he said Trump can invoke emergency powers and waive "a lot of environmental regulations to allow people to build new nuclear and other electrical generation capacity in order to power the big data centers that folks want for these advanced AI models." He added that he expects that to happen "pretty early in the Trump Administration."
    Trump's vendettas and grievances
    The president-elect is driven by vendettas and grievances more than he is by policy. And when it comes to tech, he has plenty of them.
    In the 2020 election, Meta founder Mark Zuckerberg and his wife started a foundation to ensure that everyone can vote and every vote can be counted. Since then, Trump threatened to investigate him and send him to jail if re-elected, saying, "We are watching him closely, and if he does anything illegal this time, he will spend the rest of his life in prison."
    Zuckerberg got the message, offering accolades after last summer's assassination attempt: "Seeing Donald Trump get up after getting shot in the face and pump his fist in the air with the American flag is one of the most badass things I've ever seen in my life. On some level as an American, it's like hard to not get kind of emotional about that spirit and that fight, and I think that that's why a lot of people like the guy."
    Then there's Amazon founder and Washington Post owner Jeff Bezos. When Trump was president, he frequently took aim at Amazon and Bezos because the Post published articles that angered Trump. He didn't just criticize and threaten him; Trump also yanked a multi-billion-dollar cloud contract with the Defense Department from Amazon.
    This time around, Bezos is doing Trump's bidding. He canceled the Post's planned endorsement of Vice President Kamala Harris, even though the newspaper has endorsed candidates for president for decades. After Trump was elected, Bezos praised him, writing on X, "Big congratulations to our 45th and now 47th President on an extraordinary political comeback and decisive victory."
    Those are just two of the tech titans who have praised Trump even though he had targeted them. Microsoft CEO Satya Nadella has so far managed to avoid getting on Trump's bad side. He hasn't gone out of his way to praise the president-elect, either, offering Trump only a pro forma congratulation after the election.
    But with Musk as a Trump adviser, and what will likely be a big focus on AI in the new administration, it's not clear whether Nadella will be able to stay out of Trump's crosshairs. What's also not clear is how Nadella will react if Trump threatens him, and how that might affect Microsoft's financial future and its sense of itself as a moral company.
  • WWW.COMPUTERWORLD.COM
    The biggest IT threat? That seemingly innocuous web browser
    For decades, enterprises have allowed their workers to use whatever free browser they wanted to access the most sensitive files possible. CIOs believed that security software in the environment, such as endpoint security apps or supposedly secure web gateways, would deliver any needed protections.
    And until 2020, that view was somewhat valid. But when various pandemic-fueled changes hit the workplace, almost everything changed. Those changes included massive numbers of new remote sites; skyrocketing shifts away from on-premises tools and apps to the cloud; and far more SaaS deployments. Yet even as extreme browser exposure became far more dangerous, the shift was so gradual that almost no one in IT noticed any danger.
    The browser issue here actually arises from two distinct problems: virtually no limits on which browser can be used, and no protections at the enterprise level that sit atop those browsers.
    The first is the most bizarre. Somehow, IT permits any browser to be used in their sensitive environments. Can you imagine that being permitted for anything else? How many CIOs would tell workers they can use whichever VPN app they want, including free consumer-grade VPNs? Would an enterprise CIO be OK with someone in finance ignoring the corporate license for Excel and instead opting to put sensitive payroll details into a freeware spreadsheet found at a gaming site in China? Or maybe an employee could forgo a company-paid Zoom account for discussions of that upcoming acquisition and use a freebie service no one's ever heard of?
    [Related: 10 tips for a secure browsing experience]
    IT typically maintains strict controls over all software that touches their privileged areas, but browsers are a security free-for-all?
    Let's delve briefly into the history. When graphical browsers first moved into the enterprise in large numbers around 1994 (don't forget that the earliest browsers, such as Cello and Lynx, were pure text), the goal was to make it as easy as possible for people to interact with the web. The internet at that point had been around for decades, but the web had only recently become popularized. The problem is that as environments became exponentially more complex and access to ultra-sensitive data soared, IT didn't stop to reconsider ancient browser policies.
    If IT admins were to choose one specific browser to mandate, controls would become light-years easier. They could even require users to access the latest version from IT, allowing updates to be strictly maintained. Internal web pages could be designed for that browser, making it far more likely to deliver an identical experience for all users. I routinely run into secure areas where critical text (such as the "next" button) is offscreen. That means trying three or four browsers until one works. Imagine that problem disappearing simply by mandating one browser for all.
    That kind of corporate mandate brings up a few issues:
    Desktop vs. mobile. Some enterprises might need to consider standardizing on one browser for desktop and possibly a different browser for mobile.
    IT political issues. Some of the browsers with major market share are deeply integrated with one vendor's environment, such as Google Chrome and Microsoft Edge. Depending on how your environments are integrated with different platforms, this could be an issue.
    Compliance. Some of the browser makers are more aggressive at pushing privacy and other data boundaries, especially when generative AI is involved. Standardizing on one of those might lead to corporate compliance issues, especially if you have a substantial presence in Western Europe, Australia, or Canada.
    Geography. Beyond the compliance issues, there are language and other regional support issues to consider, especially if you have a major presence in Asia.
    That brings us to problem two. Browsers were never designed to be even a little bit secure in the early days, and not much has changed today. That's why IT needs to insist that something act as a secure layer between your environment and any browser, even your hand-chosen favorite.
    Because the needs of every enterprise are different, there's no one-size-fits-all browser security solution. The browser security layer must play well with your existing systems, and your particular compliance needs, colored by geography and verticals, are critical factors.
    "The browser is the number one app that everyone is using. The browsers of today are much more powerful than the older versions," said Dor Zvi, CEO of security firm Red Access. "They allow you to run JavaScript, handle logins and tokens, and render HTML. The browser today is so powerful that it acts almost like an operating system."
    Zvi argues that there is a reason those browser capabilities are so dangerous. "A lot of the attacks today can now happen entirely within the browser. It is happening inside the frame of the browser, which means it is not on the network side and not on the endpoint side. The browser now holds the cookies and tokens for all of your applications," he said. "Let's say someone is trying to steal my Okta two-factor authentication. [The attacker] can run it by solely using the browser privileges and no one will ever know about it."
    Another problem with allowing any browser from around the world to access your systems involves browser extensions. In the same way Apple and Google can't adequately police their apps to detect and remove malicious ones, browser teams can't verify the legitimacy of extensions. A malicious extension often has unlimited access to everything the browser can do or see. That's why standardizing on one browser is important: it allows IT to also rein in browser extensions.
    It's a lot to think about, but preferably not right before bed.
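A single-browser mandate like the one argued for above is often enforced by checking the client's User-Agent string at a gateway. A minimal sketch of that check, where the mandated browser name and minimum version are hypothetical choices (nothing in the article prescribes a specific browser or version):

```python
import re

# Hypothetical policy: IT mandates one browser and a minimum major version.
MANDATED_BROWSER = "Chrome"
MINIMUM_MAJOR_VERSION = 120

def is_compliant(user_agent: str) -> bool:
    """Return True if the User-Agent matches the mandated browser
    at or above the minimum major version."""
    match = re.search(rf"{MANDATED_BROWSER}/(\d+)", user_agent)
    if match is None:
        return False  # wrong browser entirely
    return int(match.group(1)) >= MINIMUM_MAJOR_VERSION

# An up-to-date mandated browser passes; outdated or non-mandated ones fail.
print(is_compliant("Mozilla/5.0 ... Chrome/131.0.0.0 Safari/537.36"))  # True
print(is_compliant("Mozilla/5.0 ... Chrome/90.0.4430.93 Safari/537.36"))  # False
print(is_compliant("Mozilla/5.0 ... Firefox/132.0"))  # False
```

In practice User-Agent strings are messier than this (Edge's UA also contains a Chrome token, and UAs can be spoofed), so a production control would pair a check like this with managed-device enforcement rather than rely on the string alone.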
  • WWW.TECHNOLOGYREVIEW.COM
    These AI Minecraft characters did weirdly human stuff all on their own
    Left to their own devices, an army of AI characters didn't just survive; they thrived. They developed in-game jobs, shared memes, voted on tax reforms, and even spread a religion.
    The experiment played out on the open-world gaming platform Minecraft, where up to 1,000 software agents at a time used large language models (LLMs) to interact with one another. Given just a nudge through text prompting, they developed a remarkable range of personality traits, preferences, and specialist roles, with no further inputs from their human creators.
    The work, from AI startup Altera, is part of a broader field that wants to use simulated agents to model how human groups would react to new economic policies or other interventions. But for Altera's founder, Robert Yang, who quit his position as an assistant professor in computational neuroscience at MIT to start the company, this demo is just the beginning. He sees it as an early step towards large-scale "AI civilizations" that can coexist and work alongside us in digital spaces. "The true power of AI will be unlocked when we have actually truly autonomous agents that can collaborate at scale," says Yang.
    Yang was inspired by Stanford University researcher Joon Sung Park who, in 2023, found that surprisingly humanlike behaviors arose when a group of 25 autonomous AI agents was let loose to interact in a basic digital world. "Once his paper was out, we started to work on it the next week," says Yang. "I quit MIT six months after that." Yang wanted to take the idea to its extreme. "We wanted to push the limit of what agents can do in groups autonomously."
    Altera quickly raised more than $11 million in funding from investors including A16Z and former Google CEO Eric Schmidt's emerging-tech VC firm. Earlier this year Altera released its first demo: an AI-controlled character in Minecraft that plays alongside you.
    Altera's new experiment, Project Sid, uses simulated AI agents equipped with "brains" made up of multiple modules. Some modules are powered by LLMs and designed to specialize in certain tasks, such as reacting to other agents, speaking, or planning the agent's next move.
    The team started small, testing groups of around 50 agents in Minecraft to observe their interactions. Over 12 in-game days (four real-world hours), the agents began to exhibit some interesting emergent behavior. For example, some became very sociable and made many connections with other characters, while others appeared more introverted. The "likability" rating of each agent (measured by the agents themselves) changed over time as the interactions continued. The agents were able to track these social cues and react to them: in one case an AI chef tasked with distributing food to the hungry gave more to those whom he felt valued him most.
    More humanlike behaviors emerged in a series of 30-agent simulations. Despite all the agents starting with the same personality and the same overall goal (to create an efficient village and protect the community against attacks from other in-game creatures), they spontaneously developed specialized roles within the community, without any prompting. They diversified into roles such as builder, defender, trader, and explorer. Once an agent had started to specialize, its in-game actions began to reflect its new role. For example, an artist spent more time picking flowers, farmers gathered seeds, and guards built more fences.
    "We were surprised to see that if you put [in] the right kind of brain, they can have really emergent behavior," says Yang. "That's what we expect humans to have, but don't expect machines to have."
    Yang's team also tested whether agents could follow community-wide rules. They introduced a world with basic tax laws and allowed agents to vote for changes to the in-game taxation system. Agents prompted to be pro or anti tax were able to influence the behavior of other agents around them, enough that they would then vote to reduce or raise tax depending on whom they had interacted with.
    The team scaled up, pushing the number of agents in each simulation to the maximum the Minecraft server could handle without glitching: up to 1,000 at once in some cases. In one of Altera's 500-agent simulations, they watched how the agents spontaneously came up with and then spread cultural memes (such as a fondness for pranking, or an interest in eco-related issues) among their fellow agents. The team also seeded a small group of agents to try to spread the (parody) religion Pastafarianism around the different towns and rural areas that made up the in-game world, and watched as these Pastafarian priests converted many of the agents they interacted with. The converts went on to spread Pastafarianism (the word of the Church of the Flying Spaghetti Monster) to nearby towns in the game world.
    The way the agents acted might seem eerily lifelike, but really all they are doing is regurgitating patterns the LLMs have learned from being trained on human-created data on the internet. "The takeaway is that LLMs have a sophisticated enough model of human social dynamics [to] mirror these human behaviors," says Altera co-founder Andrew Ahn. In other words, the data makes them excellent mimics of human behavior, but they are in no way alive.
    But Yang has grander plans. Altera plans to expand into Roblox next, but Yang hopes to eventually move beyond game worlds altogether. Ultimately, his goal is a world in which humans don't just play alongside AI characters, but also interact with them in their day-to-day lives. His dream is to create a vast number of "digital humans" who actually care for us and will work with us to help us solve problems, as well as keep us entertained. "We want to build agents that can really love humans (like dogs love humans, for example)," he says.
    This viewpoint, that AI could love us, is pretty controversial in the field, with many experts arguing it's not possible to recreate emotions in machines using current techniques. AI veteran Julian Togelius, for example, who runs games testing company Modl.ai, says he likes Altera's work, particularly because it lets us study human behavior in simulation. But could these simulated agents ever learn to care for us, love us, or become self-aware? Togelius doesn't think so. "There is no reason to believe a neural network running on a GPU somewhere experiences anything at all," he says.
    But maybe AI doesn't have to love us for real to be useful. "If the question is whether one of these simulated beings could appear to care, and do it so expertly that it would have the same value to someone as being cared for by a human, that is perhaps not impossible," Togelius adds. "You could create a good-enough simulation of care to be useful. The question is whether the person being cared for would care that the carer has no experiences."
    In other words, so long as our AI characters appear to care for us in a convincing way, that might be all we really care about.
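Altera hasn't published its module internals in this article, but the "brain made up of multiple modules" idea it describes can be sketched in miniature: each module handles one concern (social reaction, speech, planning), and the agent composes their outputs each tick. Everything below, the module names, the `Observation` fields, and the stub logic, is an illustrative assumption rather than Altera's actual design; in Project Sid the modules are driven by LLM calls, not the hard-coded strings shown here.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """What the agent perceives this tick (hypothetical fields)."""
    nearby_agents: list
    message: str = ""

@dataclass
class Agent:
    name: str
    role: str = "unassigned"   # roles like builder/defender emerge over time
    memory: list = field(default_factory=list)

    # --- modules: in Project Sid each of these would be an LLM call ---
    def react(self, obs: Observation) -> str:
        """Social-reaction module: remember who we interacted with."""
        self.memory.extend(obs.nearby_agents)
        return f"{self.name} noticed {len(obs.nearby_agents)} agents"

    def speak(self, obs: Observation) -> str:
        """Speech module: respond to an incoming message, if any."""
        return f"{self.name} replies to '{obs.message}'" if obs.message else ""

    def plan(self) -> str:
        """Planning module: pick the next action from role and memory."""
        return f"{self.name} ({self.role}) plans next move"

    def tick(self, obs: Observation) -> list:
        """Compose the modules' outputs into this tick's behavior."""
        return [out for out in (self.react(obs), self.speak(obs), self.plan()) if out]

agent = Agent("chef", role="farmer")
actions = agent.tick(Observation(nearby_agents=["guard", "trader"], message="hello"))
print(actions)
```

The point of the decomposition is the one the article makes: specialization and social tracking live in separate components, so behaviors like the chef favoring agents who valued him can emerge from accumulated `memory` rather than from a single monolithic prompt.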
  • WWW.TECHNOLOGYREVIEW.COM
    The way we measure progress in AI is terrible
    Every time a new AI model is released, it's typically touted as acing its performance against a series of benchmarks. OpenAI's GPT-4o, for example, was launched in May with a compilation of results showing its performance topping every other AI company's latest model in several tests.
    The problem is that these benchmarks are poorly designed, the results hard to replicate, and the metrics they use frequently arbitrary, according to new research. That matters because AI models' scores against these benchmarks will determine the level of scrutiny and regulation they receive.
    "It seems to be like the Wild West because we don't really have good evaluation standards," says Anka Reuel, an author of the paper, who is a PhD student in computer science at Stanford University and a member of its Center for AI Safety.
    A benchmark is essentially a test that an AI takes. It can be in a multiple-choice format, like the most popular one, the Massive Multitask Language Understanding benchmark (MMLU), or it could be an evaluation of an AI's ability to do a specific task, or of the quality of its text responses to a set series of questions.
    AI companies frequently cite benchmarks as testament to a new model's success. "The developers of these models tend to optimize for the specific benchmarks," says Anna Ivanova, professor of psychology at the Georgia Institute of Technology and head of its Language, Intelligence, and Thought (LIT) lab, who was not involved in the Stanford research.
    These benchmarks already form part of some governments' plans for regulating AI. For example, the EU AI Act, which goes into force in August 2025, references benchmarks as a tool to determine whether or not a model demonstrates "systemic risk"; if it does, it will be subject to higher levels of scrutiny and regulation. The UK AI Safety Institute references benchmarks in Inspect, its framework for evaluating the safety of large language models.
    But right now, they might not be good enough to use that way. "There's this potential false sense of safety we're creating with benchmarks if they aren't well designed, especially for high-stakes use cases," says Reuel. "It may look as if the model is safe, but it is not."
    Given the increasing importance of benchmarks, Reuel and her colleagues wanted to look at the most popular examples to figure out what makes a good one, and whether the ones we use are robust enough. The researchers first set out to verify the benchmark results that developers put out, but often they couldn't reproduce them. To test a benchmark, you typically need some instructions or code to run it on a model. Many benchmark creators didn't make the code to run their benchmark publicly available. In other cases, the code was outdated.
    Benchmark creators often don't make the questions and answers in their data set publicly available either. If they did, companies could just train their model on the benchmark; it would be like letting a student see the questions and answers on a test before taking it. But that makes them hard to evaluate.
    Another issue is that benchmarks are frequently "saturated," which means all the problems have pretty much been solved. For example, let's say there's a test with simple math problems on it. The first generation of an AI model gets 20% on the test, failing. The second generation of the model gets 90%, and the third generation gets 93%. An outsider may look at these results and determine that AI progress has slowed down, but another interpretation could just be that the benchmark got solved and is no longer that great a measure of progress. It fails to capture the difference in ability between the second and third generations of a model.
    One of the goals of the research was to define a list of criteria that make a good benchmark. "It's definitely an important problem to discuss the quality of the benchmarks, what we want from them, what we need from them," says Ivanova. "The issue is that there isn't one good standard to define benchmarks. This paper is an attempt to provide a set of evaluation criteria. That's very useful."
    The paper was accompanied by the launch of a website, BetterBench, that ranks the most popular AI benchmarks. Rating factors include whether or not experts were consulted on the design, whether the tested capability is well defined, and other basics: for example, is there a feedback channel for the benchmark, or has it been peer-reviewed?
    The MMLU benchmark had the lowest ratings. "I disagree with these rankings. In fact, I'm an author of some of the papers ranked highly, and would say that the lower-ranked benchmarks are better than them," says Dan Hendrycks, director of CAIS, the Center for AI Safety, and one of the creators of the MMLU benchmark. That said, Hendrycks still believes that the best way to move the field forward is to build better benchmarks.
    Some think the criteria may be missing the bigger picture. "The paper adds something valuable. Implementation criteria and documentation criteria, all of this is important. It makes the benchmarks better," says Marius Hobbhahn, CEO of Apollo Research, a research organization specializing in AI evaluations. "But for me, the most important question is, do you measure the right thing? You could check all of these boxes, but you could still have a terrible benchmark because it just doesn't measure the right thing."
    Essentially, even if a benchmark is perfectly designed, one that tests the model's ability to provide compelling analysis of Shakespeare sonnets may be useless if someone is really concerned about AI's hacking capabilities.
    "You'll see a benchmark that's supposed to measure moral reasoning. But what that means isn't necessarily defined very well. Are people who are experts in that domain being incorporated in the process? Often that isn't the case," says Amelia Hardy, another author of the paper and an AI researcher at Stanford University.
    There are organizations actively trying to improve the situation. For example, a new benchmark from Epoch AI, a research organization, was designed with input from 60 mathematicians and verified as challenging by two winners of the Fields Medal, the most prestigious award in mathematics. The participation of these experts fulfills one of the criteria in the BetterBench assessment. The current most advanced models are able to answer less than 2% of the questions on the benchmark, which means there is a significant way to go before it is saturated.
    "We really tried to represent the full breadth and depth of modern math research," says Tamay Besiroglu, associate director at Epoch AI. Despite the difficulty of the test, Besiroglu speculates it will take only around four years for AI models to saturate the benchmark, scoring higher than 80%.
    And Hendrycks's organization, CAIS, is collaborating with Scale AI to create a new benchmark that he claims will test AI models against the frontier of human knowledge, dubbed Humanity's Last Exam (HLE). "HLE was developed by a global team of academics and subject-matter experts," says Hendrycks. "HLE contains unambiguous, non-searchable questions that require a PhD-level understanding to solve." If you want to contribute a question, you can do so here.
    Although there is a lot of disagreement over what exactly should be measured, many researchers agree that more robust benchmarks are needed, especially since they set a direction for companies and are a critical tool for governments.
    "Benchmarks need to be really good," Hardy says. "We need to have an understanding of what 'really good' means, which we don't right now."
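The saturation effect Reuel describes, where a jump from 90% to 93% stops telling you anything about capability, is easy to see in a toy scorer: once most of a fixed question set is solved, accuracy compresses against the ceiling. The answer key and model answers below are invented purely for illustration.

```python
def accuracy(answers: dict, key: dict) -> float:
    """Fraction of benchmark questions answered correctly."""
    correct = sum(answers.get(q) == a for q, a in key.items())
    return correct / len(key)

# A tiny fixed "benchmark": an answer key for 10 multiple-choice questions.
key = {f"q{i}": "A" for i in range(10)}

# Three hypothetical model generations, each solving more of the fixed set.
gen1 = {f"q{i}": ("A" if i < 2 else "B") for i in range(10)}  # 2/10 correct
gen2 = {f"q{i}": ("A" if i < 9 else "B") for i in range(10)}  # 9/10 correct
gen3 = {f"q{i}": "A" for i in range(10)}                      # 10/10 correct

for name, answers in [("gen1", gen1), ("gen2", gen2), ("gen3", gen3)]:
    print(name, accuracy(answers, key))
# Once the set is nearly solved (gen2 -> gen3), the score can only move a few
# points, no matter how much more capable the newer model actually is.
```

A saturated benchmark like this can make progress look stalled when the real limitation is the test, which is exactly why the researchers treat saturation as a design flaw rather than evidence about the models.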