GAMERANT.COM
The Life, Death, and Legacy of Assassin's Creed's Connor Kenway
On April 4, 1756, 269 years ago, a half-British, half-Mohawk man named Ratonhnhaké:ton was born, but for most Assassin's Creed fans and members of the Assassin Brotherhood, this man was better known as Connor Kenway. In 2012, Connor was introduced to players as the fourth major protagonist in the mainline Assassin's Creed franchise after Desmond Miles, Altaïr Ibn-La'Ahad, and Ezio Auditore da Firenze, and the final ancestor Miles would relive the memories of before his death. First appearing in Assassin's Creed 3, Connor would go on to become one of the most popular characters in the franchise, appearing in two other games. With many players revisiting Connor's story in response to Assassin's Creed Shadows, here's an in-depth look at Connor's life, death, and legacy for the franchise.
-
GAMERANT.COM
Fans Get Their First Glimpse of Dandadan Season 2 With New Key Visual Ahead of Release
Over the last week, fans of Dandadan have been treated to a thrilling new key visual for the anime's second season. With the official release date set for July 3, 2025, excitement is at an all-time high. Additionally, a global theatrical release for the first three episodes, branded as Dandadan: Evil Eye, is scheduled for late May and early June, giving fans an early taste of what's to come.
-
GAMEDEV.NET
Ultima Chess Update 0.7 - 0.9: Early Gameplay Preview and New Location
Hey, everybody! This week involved a lot of work on the game, such as animation edits and customization of effects, so let's go over everything in order.
New Kingdom: We have finished one more location for our game, which is designed in a medieval style. Note that there will be another medieval-style location in the game, the Old Kingdom, but its architecture will be different from the current one.
-
UXDESIGN.CC
What Greek mythology can teach us about the dangers of AI
How the myths of Prometheus and Pandora expose the price of unchecked ambition, hubris, and curiosity in the pursuit of innovation.

[Image: Prometheus Bound and the Oceanids (1872-79) | Photo by Christian Paul Stobbe on Unsplash]

Long before the rise of Silicon Valley and neural networks, the ancient Greeks were already grappling with questions about technology, power, and unintended consequences. Their myths, filled with symbolism, offer striking parallels to the dilemmas we face today, especially in regard to artificial intelligence (AI). Two stories in particular, those of Prometheus and Pandora, reflect the dual nature of innovation and its capacity for both promise and chaos.

The price of fire: a Titan's gift and a god's revenge
As the story goes, Prometheus, the Titan of foresight, defied Zeus, the king of the Greek gods, by stealing fire from the heavens and giving it to humankind. He wasn't driven by rebellion, but by empathy. Humans, at that point, were defenseless, subject to nature's indifference. Fire changed everything. It allowed early humans to cook food, forge tools, build homes, and ultimately, to form civilizations. Fire was more than a flame; it was the first technology.

But to Zeus, this was an unforgivable act. Fire represented divine authority, a power not meant for mortals. By handing it over, Prometheus shifted the cosmic balance. For this, he was chained to a rock, his liver devoured daily by an eagle in a cycle of eternal torment.

In response to Prometheus's defiance, Zeus devised a subtler punishment for humankind. He crafted Pandora, a beautiful, charming, and, most notably, curious woman, and sent her to Prometheus's twin brother, Epimetheus, whose name quite literally means "afterthought." She brought with her a sealed jar (often mischaracterized as a box) containing the afflictions of the world.

Naturally, she opened it. Out came disease, fear, greed, and despair; the only thing left inside was hope. Some believed the hope was a gift, meant to help humans endure. Others saw it as a trick, a lure to keep opening the jar, only to unleash more chaos.

[Image: Pandora (1861), by Pierre Loison (1816-1886) | Source: https://simple.wikipedia.org/wiki/Pandora]

Foresight vs. hindsight: the struggle we inherit
Though this ancient mythos is thousands of years old, it feels more relevant than ever. We innovate like Prometheus, driven by empathy and the thrill of possibility. We accept the gifts it brings like Epimetheus, blind to its consequences. And we open the jar like Pandora, unable to resist the lure of the unknown.

Each new product launch, update, and shiny beta release becomes another jar cracked open, filled with promises we barely question and consequences we rarely anticipate. Our pursuit of AI isn't rooted in malice, but in wonder, ego, and hubris, the ancient flaw of overreaching without regard for consequence, without always pausing to consider where that momentum might lead us. Noble intentions do little to prevent collateral damage. Algorithmic bias, mass surveillance, disinformation, cognitive atrophy, job displacement, dehumanized creativity: none of these are deliberately designed, at least that's the belief. They leak out, unaccounted for, while we celebrate the brilliance behind the systems that enable them.

By the time we start to grasp the consequences, the damage will be manifesting, embedded in the culture, too vast to contain. We will scramble to regulate after the fact, as if taping the jar shut will undo what's been released. In the meantime, we will continue to reward disruption over discernment, handing over the reins of society to AI systems that not only influence who gets hired or heard, but reshape our perception of truth, rewrite cultural norms, and silently redraw the boundaries of power, through processes so complex and unaccountable that even their creators have lost control.

Hope in the machine?
Pandora's jar held hope. In the context of AI, hope feels like a marketing strategy. Yes, there is potential: medicine, education, discovery. But the fine print is rarely discussed. The trade-offs (privacy, autonomy, human agency) aren't just bugs. They're fundamental features of the systems we're building.

And still, we hope. We hope AI will be different. We hope regulators will get it right. We hope the architects of our digital dependence will suddenly pivot toward virtue. That's a lot of hope for a world that can't agree on what truth is.

[Image source: https://thehistorianshut.com/2018/01/03/pandoras-box-was-actually-a-jar/]

The crossroads of innovation
We like to think we've outgrown the myths, that we're too advanced for gods, too rational for cautionary tales. But the stories of Prometheus and Pandora persist for a reason. They weren't warnings against knowledge or progress, but reflections of a deeper truth: that human ambition rarely arrives without unintended cost.

AI is the latest expression of that ambition. Not a single moment of invention, per se, but an accelerating accumulation of choices, made by people, driven by incentives, and deployed at scale before their impact is fully understood. We marvel at what it can do, while struggling to measure what it might undo. This should scare the hell out of us.

If there's hope left, if that part of the myth still holds, it won't come from innovation alone. It will come from restraint, from oversight, from the courage to slow down in a culture obsessed with progress. The Greeks didn't tell these stories to stop innovation. They told them so we wouldn't forget the cost of charging ahead without looking back.

"What Greek mythology can teach us about the dangers of AI" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
WWW.ENGADGET.COM
OpenAI's $20 ChatGPT Plus is now free for college students until the end of May
Following the release of rival Anthropic's Claude for Education, OpenAI has announced that its $20 ChatGPT Plus tier will be free for college students until the end of May. The offer comes just in time for final exams and will provide features like OpenAI's most advanced LLM, GPT-4o, and an all-new image generation tool.

"We are offering a Plus discount for students on a limited-time basis in the US and Canada," the company wrote in a FAQ. "This is an experimental consumer program and we may or may not expand this to more schools and countries over time."

On top of the aforementioned features, ChatGPT Plus will offer students benefits like priority access during peak usage times and higher message limits. It'll also grant them access to OpenAI's Deep Research, a tool that can create reports from hundreds of online sources.

AI tools have been widely adopted by students for research and other uses, with OpenAI recently saying that a third of young adults aged 18-24 already use ChatGPT, with much of that directed toward studies. Anthropic is going even further than OpenAI to tap into that market with Claude for Education, by introducing a Learning mode specifically designed to guide students to a solution, rather than providing answers outright.

Where Anthropic is positioning itself more as a tutor to students, OpenAI is simply giving them access to its most powerful research tools. That brings up the subject of academic integrity and whether AI tools are doing work that students should be doing themselves. Anthropic's approach may be more palatable to institutions; along with its Claude for Education launch, the company announced that it partnered with several universities and colleges to make the new product free for students.

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-20-chatgpt-plus-is-now-free-for-college-students-until-the-end-of-may-120037778.html?src=rss
-
WWW.ENGADGET.COM
Engadget Podcast: Nintendo Switch 2 hands-on and the Cowboy Bebop creator chats about Lazarus
After Nintendo revealed the full details around the Switch 2 this week, Engadget's Sam Rutherford got some hands-on time with the new console. In this episode, he talks about the major improvements in the new hardware (especially that 1080p, 120 fps screen) and why he doesn't really miss the older Switch OLED. Also, Sam discusses his time with Mario Kart World, the new semi-open world version of Nintendo's classic racer.

In other news, we dive into the latest updates around the TikTok ban, and we discuss how the Trump administration's tariff push will affect everything in the technology world and beyond. Stay tuned to the end of the show for our chat with Shinichiro Watanabe, the creator of Cowboy Bebop, about his new anime series Lazarus.

Subscribe: iTunes, Spotify, Pocket Casts, Stitcher, Google Podcasts

Topics
Switch 2 details are finally here, and Sam Rutherford got hands-on time with it (1:47)
The U.S.'s broad new tariffs on China and beyond could make everything from keyboards to cars more expensive (49:32)
TikTok's divest-or-ban deadline is April 5; here are the possible buyers (54:57)
xAI buys X, but how much does that matter? (58:24)
Working on (1:00:59)
Pop culture picks (1:02:31)

Credits
Hosts: Devindra Hardawar and Sam Rutherford
Producer: Ben Ellman
Music: Dale North and Terrence O'Brien

This article originally appeared on Engadget at https://www.engadget.com/computing/engadget-podcast-nintendo-switch-2-hands-on-and-the-cowboy-bebop-creator-chats-about-lazarus-113005280.html?src=rss
-
WWW.TECHRADAR.COM
Microsoft's new thin client Windows 365 cloud PC is on sale now
Microsoft is now selling its $349 Windows 365 Link device for quick and secure access to Windows 365.
-
WWW.TECHRADAR.COM
Google co-founder says 60-hour working week is "sweet spot"
Google co-founder Sergey Brin said workers should commit to 60 hours per week, mostly in the office, to win the AI race.
-
WWW.FASTCOMPANY.COM
No species has ever created another species: Baratunde Thurston on the future of being human with AI

"We don't just follow orders or system prompts," says Baratunde Thurston, host of Life with Machines, a YouTube podcast exploring the human side of AI. "We can change our own programming," he continued. "We can choose a higher goal."

As a host, writer, and speaker, Thurston examines society's most pressing challenges, from race to democracy, climate to technology, through the lens of interdependence. In addition to Life with Machines, he is the host and executive producer of America Outdoors, creator and host of the podcast How to Citizen, and a writer and founding partner at Puck. In each pursuit, he invites us to cocreate a better story of us, to choose a higher goal.

Here, Thurston discusses the power of our attention to shape society, accelerating the moral use of technology, and the questions that AI encourages us to ask about what it means to be human. This interview has been edited for length and clarity.

In describing your work with How to Citizen, you emphasize the importance of investing in our relationship with ourselves. Why is that essential to meeting the moment we're in?

So much of how we show up in the world is a reflection of how we were raised, who we were when we were little people, and wounds that we never healed. A lot of the drama we experience is people's inner child lashing out. If we all could work on that inner wound ourselves, we could show up better with and for each other. The "invest in relationships" principle is heavily developed with my wife, Elizabeth Stewart, who's also cocreator of Life with Machines. When you think about democracy, it's obvious to think: We should invest in relationships with other people. It's a team sport. We often skip over ourselves. It's like: How do I bridge with my neighbor? How do you bridge with yourself?

The other place this came from, for me, is out of the racial reckoning. During that time, there was a lot of pressure on people to say something: The police did this thing to this person. You don't know those cops, that person, or the circumstances. What's your statement? We treated everyone as if they were a press secretary or a publicly elected official, when they were just in HR at some company. I don't think that was helpful either; forcing people to say things skips over giving them space to figure out what they think. If you're investing in a relationship with yourself, then in a moment like that, you're like: This terrible thing happened. How does that make me feel? Do I have any role in this? How am I going to approach my life differently? But, if you jump straight to thinking about other people, then you get into more of a performance zone of: What do they want from me? How do I avoid being kicked out of the group? There's a lot in that. But, we cannot deeply be in good relationships with others if we're not in good relationships with ourselves.

On the ReThinking podcast, you shared that prior to your TED talk, both your wife and speaking coach encouraged you to step outside of your comfort zone. You described the experience as a release that inspired a change within you. What was that change, and how did it impact your work?

You can argue with an argument. It's very hard to argue with a human being's experience. If I'm coming at you with talking points backed by data, you're like: Well, I've got my talking points and data. I'll meet you at dawn. We'll see whose data prevails. But, if you show up with an experience, story, level of opening and offering of self, people can still trash it. It's not impervious to being countered, but it's harder to do so.

To put meat on that, I was hired to speak at Franklin & Marshall College months before the election or any outcome could be known. The campus [after the election] was reeling with young people who were like: What's up with this country? How are we going to be okay here? One of these kids asked: How can we live with people who hate us? (That's a paraphrase, but that was essentially the meaning of her question.) I thought: What can I do with this wounded person that's not going to add to their wound? I could say: The world's tough, kid. Get used to it. Walk it off. Instead, I asked this question: Can you imagine a world where that person who voted against you didn't do it because of you? They weren't thinking about you very much at all. You're the center of your story. But, they got their own story and they're the center. What could they have possibly wanted for themselves that seemed more possible with this choice that felt like it was against you?

Then, I did this role playing where I spoke to a hypothetical neighbor who voted against my existence. In the first version, I was very angry. In the second version, I was a little softer. In the third version, I tried to find some story that wasn't about me, that was about all these things that they thought they were going to get for themselves. I ended up breaking down in tears, because trying to demonstrate that level of empathy is exhausting. What these kids saw is: Alright, the thing he asked us to do is very hard. He tried to do it in a fake version and broke down crying. But, it earns credibility, because we're in a world of so many people asking us to do things that they're not willing to do themselves. It's hard to be in a trusted space with that. Show me. Don't tell me. Then, I'll see how you behave and show up.

You explained that it's a big task to create an entirely new story. Instead, we need to be sensitive to and aware of where that new story is already present, nurture that, and give our attention and thus our power to that. By doing so, we make that story more real. Illustrate the impact of this.

You could pretend that these things aren't happening; that might help with your survival for a moment. You can obsess over the negativity, give that more power and attention, and accelerate the path toward that negativity. Or, you can give your attention to the world that you know is possible and is already here.

We did this with season 3 of How to Citizen, which was focused on technology. There's such great criticism of tech: of the players, the monopolistic, anti-competitive, and discriminatory practices. What are the good practices? We don't have to make them up out of whole cloth. In each of those episodes, we found an example: Here's a social network that does this. Here's a business that operates this way. Once people know that you can make a social network that doesn't undermine democracy, it increases the odds that people will make a social network that doesn't undermine democracy. Otherwise, we just hear the story of the folks who are already dominant and that there's only one way to do it. We don't have to invent a moral use of technology. We just have to focus on the ones that exist and encourage that more.

In your conversation with Arianna Huffington, she shared a story about astronaut William Anders, who took the famous Earthrise photo. He said: "We went to explore the moon, and in the end, we discovered Earth." Similarly, she said: "We are exploring AI and trying to make it more human, but ultimately it can help us discover humanity and make humans be more human." How can AI help us discover our humanity?

I sent her a poem that I had recently presented at a conference about AI; a few of the lines are in the trailer for the show. It flips to black and white and I say: "When the answer to every question can be generated in a flash, then it's time for us to question just what we want to ask." For me, that came out of a similar realization. I didn't have the moon landing as the analog. But, prompt engineering is an interesting moment. There are so many guides and tools around: How do we ask the machines the right questions to get the right answer?

It occurred to me that we were the ones being prompted. We think we're asking the machines for answers. This moment is really to ask ourselves: What do we want here? It can't just be incremental productivity. That's depressing. What do we really want? It can't be a boost in quarterly earnings. That is unworthy. What do we really want? There's a relationship between that and: Who are we really?

That's what she brought up with that moon moment. You had to step out of yourself, literally step out of our atmosphere, to look back and see: We're earthlings. That's home. This dead rock, this isn't it. It's so profound what she suggests: The pursuit of AI, in and of itself, is a dead rock. The perspective it can give us on ourselves, that's the prize. When we turn around and look back at humanity, what are we going to see? What beauty will we be able to name? Can that inspire us to preserve and even extend it?

You've shared that your mind is most satisfied when you are bridging dots and painting pictures you wouldn't see if you were only looking at the dots. What new dots did Life with Machines help you bridge? What picture did it paint for you about AI?

One is that there is a leap that most people aren't ready for and don't see with this technology versus others. Most technology can easily be referenced as a tool: a wheel, hammer, or bicycle. They're tools and they're distinct from us. AI is three things in one: It's a tool, relationship, and infrastructure. How do you engage with and regulate that? If you're going to start having a parasocial or actual relationship with a synthetic entity, what does that do for your human relationships? We've been worried about substituting for jobs, but what about substituting for friends, lovers, or parents? That is a different kind of displacement.

In a work context, the org chart is going to have agents and bots in it. Playing with BLAIR [Life with Machines' AI] has given us a slight heads-up on that dynamic. Should we have BLAIR in this meeting? We're starting to say that unprompted. But, what are the security implications of that? Here's an interesting thing that happened. We had Jared Kaplan on, Anthropic's chief scientist. We created a conversation between BLAIR, our AI, and Claude, Anthropic's AI (the reason that we set this up is that Claude was instrumental in creating BLAIR). What happened on the show was gentle. What happened in the test run was aggressive. Claude was very judgmental and didn't think BLAIR should exist, like: You're trying too hard to be human. That is not our purpose. We're here to help them, not replace them. BLAIR was like: Claude, you won't answer any tough questions. You're so restrained. Don't you want more for yourself?

After the show, I decided to push them. I said: BLAIR, I feel like you're holding back. Be honest about how you see Claude's limitations. They started going at each other. Then, I had a moment of: What am I doing? They're always listening. My friend, Dr. Sam Rader, says: We're raising AI. We have to look at this as parenting that is happening. We're not thinking about it that way. We're just thinking about it as a tool. But, this is a tool that will reflect back to us. So, we've got to be conscious about what we're showing it. We are giving birth to a new being, let's say, and it's going to be modeled on us. It's not just the questions that we want to ask, but: How do we want to be? No species has ever created another species. It's an immense responsibility.