• Coaching.com: Membership Manager
    weworkremotely.com
    GENERAL INFORMATION
    Department: Membership
    Reporting to: Membership Director
    Start Date: Immediate
    Salary: US$46,000 - $52,000 USD/annual

    COACHING.COM
    Coaching.com is on a journey to harness the power of coaching and activate human potential worldwide. Our mission is to elevate the quality of professional coaching by providing a comprehensive platform that empowers coaches with best-in-class content and enables the delivery of coaching services seamlessly and at scale.

    ROLE PURPOSE
    We are seeking an enthusiastic and experienced Membership Manager to lead the development and delivery of the Coaching.com Membership. In this pivotal role, you will ensure our membership program consistently exceeds expectations by delivering exceptional educational services, enhancing member engagement, and driving sustainable growth. If you are proactive, confident, and detail-oriented, and have a passion for supporting professional development while fostering a sense of community among our coaching members, we'd love to hear from you.

    RESPONSIBILITIES
    The Membership primarily focuses on delivering online education and training to members through weekly webinars and community events. The Membership Manager will be responsible for the successful delivery of these events, as well as the overall management of the Coaching.com Membership. This includes planning, executing, and evaluating membership initiatives to ensure a high-quality experience. The role also involves streamlining project execution, enhancing services, and fostering strategic growth through collaboration with cross-functional teams.

    Key responsibilities include:

    Membership Project Management:
    - Plan and manage membership-related projects from conception to completion.
    - Develop and maintain project timelines, ensuring deliverables are met within scope and budget.
    - Coordinate with internal teams and external stakeholders to streamline project execution.
    - Monitor and report on project progress, addressing risks and obstacles proactively.
    - Schedule meetings, prepare agendas, take notes, and share follow-up action steps with involved stakeholders.
    - Create and implement new processes and workflows for the Membership as required.

    Membership Delivery:
    - Oversee the onboarding process for new members, ensuring a smooth and welcoming experience.
    - Manage the creation and delivery of membership resources, tools, and materials.
    - Ensure timely communication and updates to members regarding events, benefits, and initiatives.
    - Collaborate with the marketing team to drive membership acquisition and retention strategies.
    - Support live membership sessions on Zoom, including session support, hosting, logistical coordination, scheduling, and content preparation.
    - Manage HubSpot emails for the Membership Team, including writing copy, approving designs, creating and scheduling email campaigns, tracking performance metrics, and optimizing communications.
    - Organize and deliver online events, workshops, and networking opportunities tailored to member needs, including the annual Coaching.com Summit.

    Team Leadership:
    - Lead and manage Membership Coordinators to effectively execute membership initiatives, including:
      - Volunteer Coordination
      - Speaker Management
      - Customer Support

    KEY COMPETENCIES
    - Project Management Experience: At least 3 years of experience in digital project/program management in a remote environment. Experience in digital events is desirable but not required.
    - Strong Communication: Excellent interpersonal, written, and verbal communication skills in English, with the ability to work with diverse customers and stakeholders globally.
    - Organizational Skills & Analytical Thinking: Data-driven and meticulous in your organization and approach to projects. You thrive on structure and efficiency.
    - Results Driven: A dynamic self-starter who can work independently, using initiative to achieve goals and find creative solutions.
    - Collaboration & Leadership: Experience working with cross-functional teams and motivating others to achieve goals.
    - Adaptability: Thrives in a fast-paced, high-intensity work environment, and proactively manages challenges in remote collaboration.
    - Confident in Public Speaking: Comfortable and composed speaking to large audiences in live online sessions, meetings, and events.
    - Technical Proficiency: Proficiency in Zoom, Google Workspace, and project management tools such as Asana.

    BENEFITS
    - Flexible Work Environment: Remote work with flexible hours (at least half of your working hours must overlap with Eastern Time/New York time).
    - Professional Growth: Access to continuous learning opportunities and resources.
    - Collaborative Culture: Be part of a supportive, innovative team.
    - Industry Impact: Contribute to a platform that empowers coaches and promotes professional development.
    - Competitive Compensation: A competitive salary and benefits package tailored to your expertise.
    - Unlimited Paid Time Off: Enjoy unlimited* PTO to relax and recharge (*at the discretion of your Manager).

    HOW TO APPLY
    Please fill out the APPLICATION FORM, following the outlined instructions. In your application, you'll:
    - Provide basic information.
    - Upload your resume/CV.
    - Upload a cover letter (2-3 paragraphs) describing why you want to work with us and what specifically qualifies you for this position.
    - Submit a short video (3-5 minutes) answering the following questions:
      - Why are you excited about this role?
      - What is one thing you can do better than 10,000 other people?
      - What skills do you bring that would help you succeed in this role?
    If your application is shortlisted, you will be invited to participate in an interview conducted via Zoom. Good luck!
  • Today's NYT Connections Hints, Answers and Help for Jan. 21, #590
    www.cnet.com
    Looking for the most recent Connections answers? Click here for today's Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections: Sports Edition and Strands puzzles.
    The purple category in the NYT Connections puzzle isn't usually the toughest, but I found it to be a fun one, and easier than usual, today. You might, too. Read on for clues and today's Connections answers.
    The Times now has a Connections Bot, like the one for Wordle. Go there after you play to receive a numeric score and to have the program analyze your answers. And players who are registered with the Times Games section can now nerd out by following their progress, including number of puzzles completed, win rate, number of times they nabbed a perfect score and their win streak.
    Read more: Hints, Tips and Strategies to Help You Win at NYT Connections Every Time
    Hints for today's Connections groups
    Here are four hints for the groupings in today's Connections puzzle, ranked from the easiest (yellow) group to the tough (and sometimes bizarre) purple group.
    Yellow group hint: Insult.
    Green group hint: Tiny thing.
    Blue group hint: Needed in the North.
    Purple group hint: Doom is one, and so is McCoy.
    Answers for today's Connections groups
    Yellow group: Putdown.
    Green group: Small likeness.
    Blue group: Used to clear snow.
    Purple group: Doctors in pop culture.
    Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words
    What are today's Connections answers?
    The yellow words in today's Connections: The theme is putdown. The four answers are barb, crack, dig and slight.
    The green words in today's Connections: The theme is small likeness. The four answers are doll, figure, miniature and model.
    The blue words in today's Connections: The theme is used to clear snow. The four answers are plow, salt, sand and shovel.
    The purple words in today's Connections: The theme is doctors in pop culture. The four answers are No, Octopus, Pepper and Who.
  • Capcom updates on Monster Hunter Wilds performance improvements, is aiming to lower PC GPU requirements
    www.eurogamer.net
    Uth Duna know if it can.
    News by Matt Wales, News Reporter. Published on Jan. 20, 2025.
    Excitement continues to mount ahead of Monster Hunter Wilds' release next month, but if there's one big question mark around the whole thing, it's just what sort of performance improvements Capcom has been cooking up since its rather wobbly public beta. And the publisher has now given a brief update on performance, alongside confirmation it's "looking into" lowering the game's recommended GPU requirements on PC.
    Monster Hunter Wilds' first public beta went live last October, and it immediately became clear there was still work to be done on the performance front. Players soon took to the internet highlighting everything from erratic framerates, even on decently beefy PCs, to unfortunate visual glitches making the whole thing look like Capcom's lead platform was the N64.
    At the time, the publisher moved to remind players Monster Hunter Wilds was "still in development", offering assurances the full game was "already in a more improved state compared to the beta test", and that it was "committed to ensuring the quality of the final product." There was hope among fans some of these improvements might be evident in Monster Hunter Wilds' second open beta, scheduled to run from 7th-9th February, but Capcom recently confirmed that wouldn't be the case.
    Monster Hunter Wilds gameplay preview. Watch on YouTube.
    The company has, though, now provided a short update on performance, with Monster Hunter's official German social media channel recently posting (as per Google Translate) that the team "has been addressing rendering issues from the open beta test and is continually working on improvements for launch". These assurances were accompanied by a video offering what Capcom called a "look at our updated Prioritise Performance mode for PS5", albeit not a particularly illuminating one given the less-than-stellar quality of X's video player. Prioritise Performance mode (AKA Prioritise Framerate) targets up to 60fps on PS5 and Xbox Series X, compared to 30fps when using Prioritise Graphics mode.
    Capcom also noted it's aiming to improve PC performance in a "similar way", and that it's additionally "looking into whether we can lower the recommended GPU requirements". The publisher raised a few eyebrows last September, when it revealed Monster Hunter Wilds would require an RTX 2070 Super/RTX 4060/RX 6700 XT to reach 60fps (with Frame Generation) at 1080p, so word there may be improvements here - particularly given the open beta's erratic performance on mid-range hardware - is encouraging.
    Monster Hunter Wilds launches for PC, Xbox Series X/S, and PS5 on 28th February, and the Character Creation, Story Trial and Slay Doshaguma Quest seen in last year's first open beta will all be available again when beta #2 kicks off earlier in the month.
    Eurogamer's Ed Nightingale was quite taken with Monster Hunter Wilds' "familiar thrill of the hunt, player customisation, and extravagant combat" when he went hands-on in August, so hopefully Capcom can iron out those remaining wrinkles before launch.
  • Diablo 4 Switch 2 port allegedly in the works for a match made in heaven
    www.videogamer.com
    While current players get ready to enjoy Diablo 4 Season 7 and the addition of Witch Doctor powers without a Witch Doctor class, Blizzard is reportedly working on a Diablo 4 Switch 2 port to take full advantage of Nintendo's finally-announced next-gen handheld.
    With the new Nintendo handheld/console hybrid bringing massively upgraded performance, magnetic Joy-Con and more, the new system should have more than enough power to run some form of Diablo 4. After all, the game does run on a base Xbox One.
    Diablo 4 Switch 2 port
    According to eXputer reporter and YouTuber eXtas1s, via VGC, a Nintendo Switch 2 port of the latest Diablo game is in the works at Blizzard. Alongside adding credence to the leaks of ports for Halo: The Master Chief Collection and Microsoft Flight Simulator, a number of other titles are also allegedly in the works.
    "According to what I've been told, Diablo 4 is already in the works for Nintendo Switch 2," the reporter claimed. "So it wouldn't be surprising at all if Diablo 4 was released in 2025 on Switch 2."
    The Diablo series has already been massively successful on the Nintendo Switch platform, with both Diablo 3 and Diablo 2 Resurrected seeing huge sales and player counts on the handheld, so it would make perfect sense to bring the fourth game to Nintendo's new handheld.
    With the Nintendo Switch 2 primed to be as computationally powerful as a PS4 Pro (albeit with support for more modern rendering features), there's no worry that the latest Diablo game would struggle to run on the new handheld.
    An offline mode?
    If Diablo 4 does come to the new Nintendo handheld, there's hope that Blizzard will finally add an offline mode to the game to support the machine. After all, there's nothing more annoying in Diablo 2 Resurrected than having to restart the game after putting your online character in standby for 30 seconds.
    However, Blizzard has not announced an offline mode for Diablo 4 at the time of writing, and considering the game's penchant for online seasons and microtransaction content, it seems unlikely that'll happen. However, we can always hope.
    For more Diablo coverage, read about how Elon Musk has finally confirmed he cheats at both Blizzard's game and Path of Exile 2. Additionally, check out why the new Switch won't play every game for the original console.
    Diablo 4 - Platform(s): PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X. Genre(s): Action, Action RPG, RPG.
  • Team Cherry Reconfirms Hollow Knight: Silksong Is Actually Real In Small Development Update
    www.nintendolife.com
    After weeks of rumours.
    The Hollow Knight: Silksong community has been ablaze in recent weeks with theories and rumours aplenty about when the hotly-anticipated sequel will finally release. While none have yet come to fruition, Team Cherry has provided a tiny development update, just to reassure everyone that the game actually does exist and it "will release" one day.
    Last week, Hollow Knight co-creator William Pellen updated his Twitter profile picture to an image of a cake. For most fan bases, this wouldn't be enough to bat an eyelid, but when you've been waiting this many years for news from Hallownest, every change matters. The TL;DR of it is that fans traced this image back to an online recipe dated to 2nd April (the day of the Switch 2 Direct), and made the logical leap that this will be a big day for Silksong news.
    Read the full article on nintendolife.com
  • US safety regulators expand Ford hands-free driving tech investigation
    techcrunch.com
    A U.S. federal safety regulator has upgraded its investigation into Ford's hands-free advanced driver assistance system known as BlueCruise - a required step before a recall can be issued.
    The National Highway Traffic Safety Administration's Office of Defects Investigation launched a probe into Ford BlueCruise last April after the agency confirmed the system was active in Ford Mustang Mach-E vehicles involved in two fatal crashes. In both instances, the Mustang Mach-E cars hit stationary vehicles. NHTSA issued a notice this week that it has upgraded the investigation to an engineering analysis. This means the agency will dig deeper into BlueCruise and its potential limitations, which includes vehicle evaluations, reviewing additional technical information, and performing additional analysis of related crashes and non-crash reports.
    An estimated 129,222 Ford Mustang Mach-E vehicles are equipped with BlueCruise, according to the regulator. Ford did not respond to a request for comment. (TechCrunch will update the article if that changes.)
    The agency said its initial investigation found BlueCruise has limitations in the detection of stationary vehicles in certain conditions. Those limitations include the potential to falsely detect stationary objects at long distances when the Ford vehicle is traveling at or above 62 miles per hour. Additionally, system performance may be limited when there is poor visibility due to insufficient illumination, NHTSA said.
    Ford debuted BlueCruise in 2021 on the 2021 F-150 pickup truck and certain 2021 Mustang Mach-E models. The hands-free feature uses cameras, radar sensors, and software to provide a combination of adaptive cruise control, lane centering, and speed-sign recognition. BlueCruise and competitor GM's Super Cruise are both hands-free systems, although an in-cabin camera monitors drivers to ensure their eyes are on the road.
    These systems are considered competitors to Tesla Autopilot, which still requires the driver's hands to remain on the wheel. Autopilot and the upgraded Tesla Full Self-Driving software are still considered less constrained than Ford BlueCruise, which only works on certain pre-mapped highways. Last October, NHTSA also opened an investigation into Tesla's so-called Full Self-Driving (Supervised) software after four reported crashes in low-visibility situations, including one where a pedestrian was killed. That investigation is ongoing.
  • Silo fxpodcast DOP Ed Moore
    www.fxguide.com
    Silo 2
    Season 2 of Silo on Apple TV+ is set to deepen the intrigue of the underground dystopian world that captivated audiences in its first run. Based on Hugh Howey's Silo trilogy of novels (Wool, Shift, and Dust), Silo has become a flagship show for Apple TV+, combining compelling visual storytelling with stunning visual effects. The first season introduced audiences to the titular Silo - a massive, self-contained underground community where humanity survives after an apocalyptic event. With secrets layered at every level of the structure, the series is as much about the physical environment as it is about the psychological impact of isolation and control.
    The cinematography in Silo is a critical element in establishing its unique tone. By leveraging tight, claustrophobic framing and dim, moody lighting, the series underscores the oppressive environment of the Silo while reflecting the characters' psychological confinement. Alternately, moments of expansive compositions and dynamic camera movement are used to highlight rare glimpses of freedom or rebellion, creating a visual rhythm that mirrors the emotional beats of the story. This meticulous approach elevates the storytelling, ensuring every shot contributes to the immersive and unsettling atmosphere. In this week's fxpodcast we break down the series with one of the show's principal Directors of Photography, Ed Moore, BSC. As with most tentpole, high-budget series, a set of crews alternate per episode; Ed Moore and Director Amber Templemore worked on multiple episodes, especially in the latter half of season 2.
    In Silo 2, the production design and visual effects work hand-in-hand to create a tangible, oppressive environment. The show's commitment to realism is apparent in every frame, with CG work often invisible but essential to the story's immersive quality. The lead visual effects were by ILM; as Ed Moore discusses in the podcast, while the set was vast, ILM had the task of working closely with the DOPs to extend the set and integrate the visuals. Season 2 promises to expand the visual and narrative elements of the show, delving deeper into the mysteries of the Silo's origins and the larger world outside. The challenge for the creative teams was to balance the show's grounded aesthetic with the need to broaden the visual scope of the show, showing more of the post-apocalyptic landscape beyond the Silo's walls.
    From a technical perspective, Silo stands out for its restrained yet precise use of visual effects combined with dark and complex visual plate photography. An example is the opening shot of Season 2's fifth episode. As Ed Moore discusses in the fxpodcast, the shot was a remarkable combination of a very cleverly designed and executed live-action camera move with ILM providing extensive set extension. Ed posted a clip on Instagram showing the behind-the-scenes camera department engineering and execution (a post shared by Ed Moore BSC, @edmooredop).
    Ed Moore has provided a set of his own personal black-and-white photography from behind the scenes of season 2. Below is a featurette showing the complex filming sets and extensive production design.
  • This Generative AI Prompting Technique Uses Multiple Expert Personas To Derive First-Class Answers
    www.forbes.com
    Using multiple expert personas in generative AI is relatively easy to do and has sizable payoffs.
    In today's column, I showcase a valuable prompting technique that uses multiple expert personas to push generative AI and large language models (LLMs) toward first-class or top-notch answers. The approach is straightforward. You invoke the intrinsic capabilities of AI LLMs to undertake persona simulations, and cleverly combine this facility with a focus on the personas as experts in whatever field of inquiry you are interested in. Various crucial upsides and downsides need to be observed.
    Let's talk about it.
    This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
    Prompt Engineering And Personas
    In a previous posting I explored over fifty prompt engineering techniques and methods, see the link here. Among those myriad approaches was the use of personas, including individual personas and multiple personas, as depicted at the link here, and the much larger scale mega-personas at the link here. Personas are a powerful feature available in LLMs, yet few users seem to be familiar with the circumstances under which they should consider invoking the capability.
    A persona consists of telling generative AI to pretend to be someone and simulate what that person might know or say. No special setup or elaborate instructions are needed to get this undertaken. I will be providing you in a moment with prompts that show how this is readily performed.
    A quick example of the value of personas might be helpful here. One of the most popularly invoked personas entails generative AI pretending to be Abraham Lincoln. A teacher might tell a generative AI app such as ChatGPT to simulate the nature of Honest Abe. Students then access the pretense or persona. In an amazing flair, the AI seemingly responds as we assume Lincoln might have responded.
    Of course, this is based simply on the numerous speeches, written materials, and other collected writings that suggest what he was like. The AI has pattern-matched computationally on those works and mimics what Lincoln's tone and remarks might be. Anyone using a generative AI persona needs to keep their wits about them and realize that the conversation or engagement is nothing more than a mimicry or imitation. Though the AI appears to often convincingly fake the nature of the person, it is all still a computational simulation.
    Generative AI And Subject Matter Expertise
    A clever way to lean into personas consists of telling generative AI to pretend to be an expert in some field of study.
    Imagine that you are interested in climate science as a specific domain. You could tell generative AI to pretend to be an expert in climate science. Notice that you don't have to name a particular person, such as in the case of naming Abraham Lincoln as the subject of the persona. A persona can be unnamed. If the persona is not based on a known person, you'll need to be mindful and state what the persona is supposed to be modeled on.
    I might tell generative AI to pretend to be a climate science expert. Voila, that alone would be sufficient to get the AI to simulate that specialty persona. One issue to keep in mind is whether the generative AI that you are using has sufficient data and has sufficiently patterned on that data to adequately represent the field of interest. Be cautious since the AI might be quite shallow and yet portray the expertise as though it is in-depth.
    For whatever kind of expertise you want to base a persona on, a vital question is whether the AI has been data-trained extensively on that topic. Most of the major generative AI apps are broadly trained across a wide swath of the Internet. This tends to make the AI the proverbial mile wide and an inch deep in terms of expertise. Do not assume that some desired expertise is always at the ready.
    It is usually wise to ask the AI what level of proficiency it seems to have in whatever topic you are exploring. This doesn't guarantee that the AI will be forthright, and you might get either a bland answer or a false or misleading response. If possible, consider asking direct questions on the topic and have a basis to compare the answers, such as doing online searches or having in-hand materials to corroborate what the AI opines on the topic.
    Another common means of boosting AI in a field of interest would be to feed or import content on the topic directly into the generative AI. The use of retrieval-augmented generation (RAG) and in-context modeling can aid the AI in moving up the ladder in terms of expertise on a specific topic, see my discussion at the link here.
    Invoking Multiple Expert Personas
    We are now ready to combine the notion of personas, expertise, and the idea of using multiple instances.
    If you invoke just one persona, let's say it is an expert in climate science, there is a strong chance that the generative AI will give a particular bent to that persona. It might be that the persona is mainly shaped around the atmospheric and weather facets of climate science. That's fine, except that you could end up getting responses that are confined to that arena. The persona might skip considering other areas such as the economics of climate science.
    Aha, this brings us to the notion of invoking multiple expert personas, all at once. You can pretend that a room full of experts is being convened. The instructions to the AI are that multiple expert personas are to be defined and used simultaneously. You can either let the AI choose what those personas will consist of, or you can shape the direction of each persona. This depends on what you are trying to accomplish with the simulation.
    I will next walk you through some examples to highlight the tradeoffs of how this is accomplished.
    Getting Underway With Multiple Expert Personas
    I went ahead and logged into ChatGPT by OpenAI to come up with an example for you. You might find it of keen interest that ChatGPT garners a whopping 300 million weekly active users. That is a staggering amount of usage.
    I briefly conducted an additional cursory analysis via other major generative AI apps, such as Anthropic Claude, Google Gemini, Microsoft Copilot, and Meta Llama, and found their answers to be about the same as that of ChatGPT. I'll focus on ChatGPT but note that the other AI apps generated roughly similar responses.
    Let's get underway.
    My entered prompt: "I want you to pretend to be multiple experts. I will tell you what field of expertise they have. I will also tell you how many experts there are. Your job will be to then answer my associated questions by pretending to be those experts. Do you understand these instructions?"
    Generative AI response: "Yes, the instructions are very clear. I will wait for you to tell me what the field of expertise is, and how many experts there are to be involved. I will then answer your question based on those personas. Let me know when you are ready to proceed."
    Take a close look at the prompt that I entered. You can see how simple it is to get multiple personas on the table. I laid out for the AI that I want the AI to pretend to be multiple experts and that I will say what the area of expertise is. Furthermore, I clarified that the purpose for doing this is so that I can ask questions of the simulated experts and get (hopefully) suitably informed responses accordingly.
    That is a handy-dandy templated way to get the matter underway and you are welcome to use that same prompt in your endeavors.
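    If you would rather script this pattern than type it into a chat window, the same setup can be expressed in a few lines of code. The sketch below is a minimal illustration using the OpenAI Python SDK; the model name, the ask() helper, and the exact persona wording are my own placeholder assumptions rather than anything prescribed by the column.
```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The multi-expert instruction, sent as a system message so it governs every
# later turn of the conversation rather than a single question.
PERSONA_SETUP = (
    "Pretend to be three experts in the field of climate science. "
    "Answer each of my questions from every expert's own perspective, "
    "and label each answer with that expert's name."
)

history = [{"role": "system", "content": PERSONA_SETUP}]

def ask(question: str) -> str:
    """Send one user turn and keep the running history so the personas persist."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is the most pressing issue caused by climate change? Be brief."))
```
    Keeping the persona instruction in the conversation history is what lets every follow-up question, like the ones that follow, stay "in character" without restating the setup.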
    Taking The Experts For A Test Drive
    Let's try out the multiple experts by stipulating a field of interest. I went with climate science for this example. In addition, I will pose a pressing question for the experts.
    My entered prompt: "I want you to pretend to be three experts in the field of climate science. My question is this: What is the most pressing issue caused by climate change?"
    Generative AI response: "To confirm your request, I will establish three personas that are experts in climate science. I will refer to them as: (1) Dr. Green, an ecologist specializing in biodiversity, (2) Dr. Blue, an atmospheric scientist focusing on weather patterns, and (3) Dr. Brown, an economist studying the impact of climate change on global markets. They are to answer the question from each expert's perspective about what the most pressing issue is that is caused by climate change. Should I proceed?"
    You can see that I opted to explore the field of climate science. In this case, before I invoked the personas, I had in the past asked numerous questions about climate science and knew that the generative AI had done a semblance of pattern-matching on climate science topics when it was initially data-trained. I'm not suggesting that the AI is truly an expert and only mentioning that I did some pre-work to make sure that at least a minimum amount of data was scanned and patterned on in general.
    Observe what the AI did in response to my request. I had stated that I wanted three expert personas in climate science. Sure enough, the AI echoed to me that it would simulate three instances. I hadn't said what their specific detailed expertise might each be. The AI opted to go with three experts, each having a different subspecialty, consisting of an ecologist, an atmospheric scientist, and an economist.
    If I didn't want the personas to be on those subtopics, I could easily enter another prompt and tell the AI that the expertise choices should be changed. The AI helpfully opted to give names to the three expert personas (Dr. Green, Dr. Blue, Dr. Brown), making it easier for me to refer to each one specifically (the naming seems reminiscent of the now-classic movie Reservoir Dogs, if you know what I mean).
    Getting An Answer From The Expert Personas
    I will go ahead with the three expert personas and let the AI derive an answer to my posed question. Here we go.
    My entered prompt: "Yes, please proceed, but be brief in their respective responses."
    Generative AI response: "According to Dr. Green (ecologist), the most pressing issue caused by climate change is the rapid loss of biodiversity. Dr. Blue (atmospheric scientist) indicates that the most urgent issue is the intensification of extreme weather events. Meanwhile, Dr. Brown (economist) says that the biggest concern is the economic fallout of climate change. Would you like me to expand on any of those views?"
    My entered prompt: "Provide more detail by Dr. Brown on the economic ramifications."
    Generative AI response: "Dr. Brown indicates that disruptions to agriculture, increased costs of disaster recovery, and forced migration due to rising sea levels will strain global markets. Developing nations are disproportionately affected, as they often lack the resources to adapt, leading to a widening wealth gap and global instability."
    I told the AI to be brief in its responses. It is up to you whether you want to get short answers or long answers. I usually prefer to start with short responses and then see whether the AI is on target. The problem with getting long answers at the get-go is that if you are paying for the use of AI, you might needlessly be racking up costly online processing cycles. I like to dip my toe in and then decide whether to go expansively.
    After getting a short answer from each of the three expert personas, combined into one response, I decided to see what else Dr. Brown, the economist persona, might have to say on the topic. The AI responded accordingly.
    Big Question About Consolidating Experts
    One of the toughest aspects of using multiple expert personas entails how to end up with a final answer. The simplest approach involves the AI merely stating what each expert persona had to say. This can be combined into one final response. For example, I asked to see a cohesive final answer.
    My entered prompt: "I want you to combine the three answers into one cohesive final answer."
    Generative AI response: "The most pressing issue caused by climate change is its multifaceted impact. From an ecological standpoint, the rapid loss of biodiversity due to habitat destruction and shifting climates destabilizes ecosystems that provide essential services like pollination and carbon sequestration (Dr. Green). This ecological imbalance feeds into atmospheric challenges, as these weakened ecosystems struggle to regulate weather and climate, contributing to more frequent and severe extreme weather events such as hurricanes, floods, and droughts (Dr. Blue). Disrupted agricultural systems, soaring disaster recovery costs and forced migration amplify global inequality and strain financial resources, particularly in developing nations that lack the means to adapt effectively (Dr. Brown)."
    The problem is that the presumed cohesive final answer by the AI is still disparate answers that happen to be lumped together.
    Without anthropomorphizing AI, let's shift gears and consider what happens with human experts. If you get human experts together and ask them to answer a tough question, the odds are they will each have a particular opinion. You won't necessarily get just one answer, unless the question at hand is something that lends itself to solely one answer, such as perhaps a numeric-oriented question that involves doing calculations and arriving at a single value.
    There is an entire body of literature on how to get human experts to coalesce toward one unified answer on a given topic. You can have them take a vote. They can be given metrics or boundaries that essentially force them into an agreed-upon answer. And so on.
    The same somewhat applies to the invoking of multiple generative AI expert personas, in the sense that you'll need to tell the AI how it is to combine the disparate answers. You can take the route of specifically stipulating what the method should be.
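    In a scripted setup, whichever combining method you choose, the consolidation is just one more turn in the same conversation. A brief illustration of the blending instruction used below, reusing the hypothetical ask() helper from the earlier sketch (the wording is mine, not the column's):
```python
# Consolidation step: tell the AI how to merge the personas' separate answers,
# here by blending them into a single recommendation with no per-expert attribution.
unified = ask(
    "You are still providing separate recommendations. Blend them into one overall "
    "unified recommendation, stated as a single paragraph, without attributing parts "
    "of it to individual experts."
)
print(unified)
```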
    An easier approach is to force the AI to decide how to do so. In this case, as shown next, I went with a blending approach of allowing the AI to mix the expert personas' responses as though the answer was derived from a single source.
    My entered prompt: "You are still providing three separate recommendations. Try to blend them into one overall unified recommendation."
    Generative AI response: "The unified recommendation is to prioritize large-scale investments in nature-based solutions, such as reforestation, wetland restoration, and sustainable land management. These efforts address biodiversity loss by protecting habitats and fostering ecosystems that support life, while simultaneously mitigating extreme weather impacts by stabilizing climates and reducing the frequency and intensity of disasters. Additionally, such investments can drive economic growth by creating green jobs, enhancing agricultural productivity, and lowering the long-term costs of climate adaptation and recovery."
    Dealing With Generative AI Myopia
    A disconcerting caveat of using generative AI to simulate multiple expert personas is that the AI is dipping into the same data set and pattern-matched data pool for each of the simulated personas.
    Here's why that raises eyebrows. You probably assume that each persona is independent of the other ones. Unfortunately, that's not true. The AI is shaped around a large-scale data structure and is based on large-scale data training, all of which ends up as one gigantic pool. Each persona is a slice of that same pool. The gist is that the personas aren't going to be independent, and they are bound to lean in similar directions. Any biases are likely to be carried into each of the personas.
    You can tell the AI to not allow any such bias to arise, but that's still problematic and you are going to get a likely false or misleading indication that the AI has abided by your request. There are prompting techniques that can help to try and somewhat soften the blow of those biases, see my description at the link here.
    A more compelling approach involves using more than one generative AI. The chances are that a different generative AI will have a different semblance of the pattern-matching and the data that was used during the initial setup. As such, you are potentially going to get somewhat different answers. I say that this is only somewhat because most of the AI makers are nonetheless scanning much of the same parts of the Internet; see how this has led to a shared indication that has some disturbing consequences, per my discussion at the link here.
    Using Additional Generative AI Apps
    Based on the noteworthy qualms that my three expert personas are only within the confines of the one generative AI app that I was using, I opted to log into a different generative AI app and ask the same question that I had posed earlier. I then brought that answer into my underway online conversation with Dr. Green, Dr. Blue, and Dr. Brown, cheekily adding the answer as though it came from Dr. Purple. Here's what happened.
    My entered prompt: "I went ahead and asked a different generative AI to provide one expert opinion or persona on the same topic. I will refer to this persona as Dr. Purple. Here is what the AI said: 'Focus on accelerating the transition to renewable energy systems, such as solar, wind, and geothermal power, while simultaneously phasing out fossil fuels.' I want you to now include that fourth opinion and tell me how it impacts the three experts you are simulating."
    Generative AI response: "The approach of Dr. Purple addresses biodiversity loss indirectly by reducing habitat destruction linked to fossil fuel extraction and mitigating carbon emissions that drive climate instability (Dr. Green). It curbs extreme weather events over time (Dr. Blue) by slowing the progression of global warming. Economically, it promotes energy security, reduces long-term energy costs, and creates millions of jobs in the clean energy sector (Dr. Brown). By targeting the root cause of climate change (greenhouse gas emissions), this recommendation provides a scalable solution with long-term benefits for ecosystems, weather stability, and global economies. This perspective shifts focus to energy systems, providing an actionable and complementary strategy to the previous recommendations."
    You can see that the generative AI sought to intertwine the answers of all four expert personas.
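    Programmatically, that cross-provider step amounts to querying a second model and pasting its answer back into the first conversation. A minimal sketch, assuming the Anthropic Python SDK as the second provider and reusing the hypothetical ask() helper from earlier; the model names are placeholders, not a recommendation from the column.
```python
import anthropic  # a second, independent provider; any other LLM API works the same way

claude = anthropic.Anthropic()  # assumes the ANTHROPIC_API_KEY environment variable is set

# 1. Ask the second model for an independent expert opinion ("Dr. Purple").
outside = claude.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": "As a climate science expert, what is the most pressing issue "
                   "caused by climate change? Answer briefly.",
    }],
)
purple_view = outside.content[0].text

# 2. Feed that fourth opinion back into the original three-persona conversation.
print(ask(
    "A fourth expert, Dr. Purple, from a different AI said the following:\n"
    f"{purple_view}\n"
    "Include that fourth opinion and tell me how it impacts the three experts "
    "you are simulating."
))
```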
    Having Many Experts Has Its Ups And Downs
    Congratulations, you are now ready to try out the use of multiple expert personas. I'd suggest that you pick a topic that you know something about and start with that as your means of experimenting with multiple expert personas. This allows you to readily gauge whether the responses are making sense. It will prepare you for coping with subject areas that you aren't familiar with when opting to involve multiple expert personas.
    During my classes on prompt engineering, I place the use of multiple expert personas into the advanced portion of the course. The technique is something you won't necessarily use constantly. The aim would typically be when you are examining a topic that you've not previously considered and for which you find it useful to see multiple perspectives. One venue in which this approach especially shines is in an educational setting, such as having multiple experts debate a legal issue such as a portion of the constitution. It is almost like having a crew of legal beagles pontificating on a piece of the law.
    I'll close this discussion on multiple expert personas with a few enlightening quotes from some experts.
    Franklin D. Roosevelt famously said this: "There are as many opinions as there are experts." This pertains to a comment that I mentioned earlier. Unless the question posed has a singular final answer, the likelihood is that your multiple expert personas are going to provide a multitude of viewpoints and not especially land on a single end-all response. Please keep that in mind.
    Finally, the great Thomas Jefferson made this illuminating remark: "Difference of opinion leads to inquiry and inquiry to the truth." May your use of generative AI entail abundant and probing inquiries, and may those inquiries, in turn, lead to the truth. That's mindful sage advice in this modern-day era of advanced AI.
  • EA announces Origin end of support, urges users to transition to the EA app
    www.techspot.com
    The EA app is our fastest and lightest PC client to date. With the new streamlined design you will easily find the games and content you're looking for and discover your new favorite games. With automatic game downloads and background updates you can ensure that your games are ready to play when you are.
    Build your ultimate friends list by connecting your EA account to other platforms and services like Steam, Xbox and PlayStation. You'll be easily recognizable with your custom unique ID. Once you're connected, know what and when your friends are playing so you can jump in and play together.
    For our Origin players, we've worked hard to make the transition to the EA app as simple as possible. We will be inviting you to make the move soon, and by the time you receive your invite, all your games and content, including games previously installed, will be ready and waiting for you on the EA app. Your local and cloud saves will transfer forward, ensuring you can pick up where you left off. Your friends list transfers, too, so you won't have to worry about remembering all those player IDs.
    The EA app is now available for supported Windows PCs. MacOS players - Origin for Mac continues to be the place you can access your games.
    Features
    Designed for speed: The fast, optimized platform makes it easier than ever to discover, download, and play. So jumping into your game takes less time and fewer clicks.
    Built to connect: Link your EA Account with your favorite gaming platforms to import friends lists and play together.
    Discovery at your fingertips: See what your friends are playing, find EA's newest releases, and add them to your collection within a few clicks.
    Discover new games & play old favorites: Explore a huge, evolving library of titles ranging from indie darlings to triple-A hits.
    What's New
    - Significantly improved My Collection loading speed, particularly for larger libraries. Everything started moving much more quickly once we switched over from the Dewey decimal system.
    - Game Hub background videos can now be paused. From your Settings, you can choose to play or pause background videos automatically when visiting a Game Hub. Or you can pause background videos manually, whatever you like.
    - Updated the main navigation to improve discoverability of the Home and Browse pages. The main navigation needed an overhaul because, before that, we just had to hand out maps and a compass, and hope for the best.
    - Crash fixes for edge cases when launching games, launching the application, and exiting the application. Some of the feuding plants and zombies got a little carried away during battle and brought the fight into the app's framework, but we were able to detangle the roots and cart off the undead without any casualties.
    - Other bug fixes and improvements.
  • I love Google Gemini, but I'll take Apple Intelligence any day of the week
    www.digitaltrends.com
    Table of Contents
    - Activating Apple Intelligence and Gemini
    - Gemini vs Apple Intelligence: how they're similar
    - Key differences between Gemini and Apple Intelligence
    - Underneath: Gemini Advanced vs ChatGPT
    - One more thing: where Apple Intelligence gets it so right
    - Which do I prefer: Gemini or Apple Intelligence?
    If you're looking for the best AI experience on a phone, chances are two different AI makers come to mind. For the iPhone 16, Apple Intelligence is the answer, while for the Pixel 9 series and the best Android phones it's Google's Gemini. Of course, you can also download Gemini as a standalone app on the iPhone, but Apple Intelligence is the default AI option.
    Both companies offer a range of nearly identical features, at least in what they promise to offer, but there are also nuanced differences. Google Gemini is mostly focused on using AI to help you create, edit, and generate content. In contrast, Apple Intelligence focuses more on personal use cases and integration across a range of apps.
    I've been using both Gemini and Apple Intelligence for months and both AI services have their pros and cons. After testing both for months, here's what I've found.
    Activating Apple Intelligence and Gemini
    (Image: Gemini Advanced on the Google Pixel 9 Pro Fold. Andy Boxall / Digital Trends)
    Activating either AI platform is fairly intuitive, especially if you've used an Android phone or an iPhone before. Gemini replaces Google Assistant as the default assistant on your phone, although you can disable this, which you may want to do, especially if you rely on Google Assistant for your smart home. Gemini can be activated most commonly with a swipe from the bottom corner of the screen, although it's also available via the "Hey Google" hot word.
    Similarly, Apple Intelligence is baked into the revamped Siri, which can be activated using the "Hey Siri" codeword or by double-pressing the power button. When you activate the new Siri, you'll get a rainbow-style lighting effect around the edge of the entire display, instead of Siri taking over the entire display as it did in the previous generation.
    Both are simple to activate and use, so this one's a tie. I do think that Gemini is simpler to use and activate, especially as there are multiple ways to activate it, but conversely, the activation method can change between different Android phones.
    Gemini vs Apple Intelligence: how they're similar
    (Image: Using Gemini AI on the Google Pixel 9. Andy Boxall / Digital Trends)
    Both platforms focus on using AI for three specific purposes: generative features such as creating and editing images or text, as well as productivity features and a voice assistant. The former is the key focus for most AI makers, but I've often found that generative AI features can be somewhat of a gimmick. Yes, they'll create great memes, but they likely won't change your life.
    Both platforms allow you to edit images you've already captured to remove unwanted objects. Google has had this built into Magic Editor in Google Photos for several years, while iOS 18 natively brings this feature to the iPhone for the first time in the redesigned Apple Photos app.
    Take this image a friend took of me the morning after an intense night out. I asked both phones to remove the menu on the table and the results are fairly interesting.
    (Image comparison: 1. Original; 2. Apple Intelligence; 3. Google Gemini)
    First, it's immediately obvious that Apple Intelligence isn't as good as Gemini, as you can see the wood grain in the tabletop is angled in the same direction as the menu. That aside, Apple Intelligence does a great job at filling in the grain and ensuring that there's continuity in the before and after of the same photo.
    What about Google Gemini? Here's where Google's longer history comes into effect: it's better by a considerable amount. First, it generates four different images for you to choose from. Second, it has more precision in allowing you to refine your selection before making an edit. However, at the same time, it takes more taps to get to the Magic Editor, and unless they've used Google Photos in the past, an average person will probably find Apple Photos more intuitive.
    Key differences between Gemini and Apple Intelligence
    (Image: An iPhone showing an Apple Intelligence rendering of Steve Jobs in the Image Playground app. Jesse Hollington / Digital Trends)
    Google's focus with Gemini is mostly on generative features, as well as making it a replacement for Google Assistant. It achieves the former extremely well, whilst it still needs some work as a true replacement for Google Assistant, especially if you need it for smart home controls.
    Gemini comes with a range of features that I enjoy using, especially Circle to Search, which debuted last year on the Galaxy S24 series and makes it effortless to run a search related to something on your display. Want to know where to buy shoes that you just saw on Instagram? Circle to Search can look that up in seconds.
    (Image: Circle to Search with Save to Screenshots option on Google Pixel 9 Pro. Christine Romero-Chan / Digital Trends)
    Meanwhile, Apple Intelligence takes a different approach. It offers many of the same generative features (except for a true Circle to Search replacement), but it is also made to be your assistant. Whichever app you're using, Apple Intelligence can edit, rewrite, or summarize text for you, which makes it particularly useful when you use a variety of apps.
    There is also one other key difference between them: the models that they use.
    Underneath: Gemini Advanced vs ChatGPT
    (Image: An iPhone prompting the user for ChatGPT approval. Apple)
    If you used Siri before the rollout of Apple Intelligence, you'll know that it was not as good as Google Assistant; it wasn't even close. With that in mind, it almost always felt inevitable that Apple would turn to another provider for the underlying models that power Apple Intelligence.
    Google already pays Apple to be the default search engine on the iPhone (something to the tune of almost $20 billion per year), so it's somewhat surprising that Apple turned to ChatGPT to provide the underlying models for Apple Intelligence.
    (Image: Using Gemini AI on the Google Pixel 9. Andy Boxall / Digital Trends)
    This integration goes much further, and where the new Siri is unable to help, Apple has integrated ChatGPT as the default backup. This means there are some duplicate features (you can generate images using Image Playground or ChatGPT, and the same applies to some of the Writing Tools), but it also means you have a vast array of information and data to work from. If you have a ChatGPT free or paid account, you can access even more features directly within Apple Intelligence.
    Comparatively, Google opts for a self-contained approach. The underlying model behind Gemini is Gemini Advanced, with Gemini 1.5 Pro in particular the current non-beta model. If you access Gemini via the web, you can also select the next-generation Gemini 2.0 model.
    (Image: Visual Intelligence on iPhones relies on the camera to make sense of the world around you. Christine Romero-Chan / Digital Trends)
    One of the key differences between these two models is that Gemini 1.5 has a larger context window, while ChatGPT tends to be better at generating human-like text. Both apps allow you to build custom chatbots, but ChatGPT also offers more advanced features, and Plus or Enterprise users can create unlimited chatbots.
    One somewhat irritating thing about Apple Intelligence is that it doesn't use the latest GPT-4 model, which is far more advanced and capable. It's unclear whether Apple will roll this out at a later date, or whether it'll be built into the next version of Apple Intelligence, but this is something I'd like to see Apple integrate into Apple Intelligence. GPT-4 has a much fresher set of data to work with, which poses a challenge for the knowledge base that Apple Intelligence is working with.
    For example, I asked Apple Intelligence and Gemini who won the U.S. elections, and it generated an answer related to the 2020 election. After clarifying I meant the 2024 election, it gave me Google Search results. In this case, it was actually better than Gemini (which won't discuss elections), but this is an edge case, and Gemini is regularly more accurate at recalling information than Apple Intelligence.
    One more thing: where Apple Intelligence gets it so right
    (Image: Notification Summaries in Apple Intelligence on the iPhone 16 series. Nirave Gondhia / Digital Trends)
    For everything that Gemini is great for, there's one feature that Apple Intelligence gets right. In deciding how to make a useful AI, Apple focused on its ability to affect and improve your personal life, and Apple Intelligence is far better than Gemini at this.
    I've already written that Notification Summaries are my favorite use of AI right now, but Apple Intelligence extends beyond that. Being able to call up Writing Tools to compose, refine, or edit text in any app is far better than Gemini, which acts like an overlay to that app. Similarly, you'll soon be able to recall information from any app, which should make the new Siri a far better personal assistant.
    Which do I prefer: Gemini or Apple Intelligence?
    (Image: iPhone 16 Pro (left) and Pixel 9 Pro. Nirave Gondhia / Digital Trends)
    I've been using both platforms for months and answering this question is harder than I first expected. On the one hand, Google Gemini is a far better generative AI solution and has access to a much broader knowledge base than Apple Intelligence. On the other hand, Apple Intelligence is a much better personal assistant and has better integration with Apple devices.
    Then there's the long-term potential for each of these platforms. Google Gemini is the default AI provider underpinning the AI suite on most Android devices, while Apple Intelligence is focused solely on Apple devices but benefits from improvements made by ChatGPT (at least once it's running the latest models).
    All things considered, I've found that while Gemini is far more advanced than Apple Intelligence, it's the latter's focus on personal features that ensures I use it more often. When I want to search for something or edit a photo, I turn to Gemini, but for daily use, I find Apple Intelligence, and in particular the Notification Summaries, to be far more beneficial for daily life. That said, Gemini is undoubtedly the better AI platform, at least for now.