• Comet Browser Review | The Beginning of a New Era for the Internet!

    arabhardware.net
    The post "Comet Browser Review | The Beginning of a New Era for the Internet!" appeared first on عرب هاردوير (Arab Hardware).
  • The Latest Projects from Cornell’s ECE 4760/5730

    hackaday.com
    ECE 4760/5730 is the Digital Systems Design Using Microcontrollers course at Cornell University taught by [Hunter Adams]. The list of projects for spring this year includes forty write-ups — if …read more
  • 25 Years Later, Valkyrie Profile Remains One Of The Greatest Cult JRPG Classics
    www.gamespot.com
    Valkyrie Profile is celebrating its 25th anniversary today, August 29, 2025. Below, we look back at why the RPG went overlooked at the time, and what makes it still stand out as special today.

It's not hard to see why Valkyrie Profile wound up overlooked in its initial North American release. Japanese publisher Enix had only recently revived its US branch, likely due to both the surge in popularity of JRPGs in the post-Final Fantasy 7 era and the success Sony had licensing and releasing Star Ocean: The Second Story globally. But it was going to be an uphill struggle to get a new PS1 RPG any traction, even one made by Star Ocean 2 developer Tri-Ace. The Dreamcast was already out, the US launch of the PS2 was less than two months away, and Squaresoft had dropped Chrono Cross, a follow-up to the beloved Chrono Trigger, a couple of weeks earlier.

It was my freshman year of college, the weekend just before my birthday, and I wanted a new RPG to keep me occupied between classes. I had two choices: Valkyrie Profile or Chrono Cross. After a delightful experience with Star Ocean 2 the summer before, my gut was telling me that Valkyrie Profile was the one I should pick.

Continue Reading at GameSpot
  • Unanswered Questions In Gears of War: Reloaded That Could Be Answered In Gears of War: E-Day
    gamerant.com
    Gears of War was one of the first new IPs on the Xbox 360 to really catch on. It arrived at the perfect time, just before Halo 3, and some enjoyed this third-person shooter even more. Between the brutal action, co-op aspects, and cover mechanics, it was a whole new breed of exclusive shooter for the Xbox family. Gears of War remained a staple for Microsoft for a good decade, until the series started to cool down around 2020.
  • Loving Alien: Earth? Check out the best Alien game of the decade
    www.polygon.com
    If, like me, you have been hopelessly sucked into Alien: Earth, Noah Hawley's new FX series, you might already be feeling that nagging question in the back of your mind: "The season is already halfway over, what am I going to watch to fill the empty, chestburster-shaped hole in my heart once Season 1 is finished?"
  • Unmasking The Magic: The Wizard Of Oz Method For UX Research
    smashingmagazine.com
    New technologies and innovative concepts frequently enter the product development lifecycle, promising to revolutionize user experiences. However, even the most ingenious ideas risk failure without a fundamental grasp of how users interact with these new experiences. Consider the plight of the Nintendo Power Glove. Despite being a commercial success (selling over 1 million units), its release in late 1989 was followed by its discontinuation less than a full year later in 1990. The two games created solely for the Power Glove sold poorly, and there was little use for the Glove with Nintendo's already-popular traditional console games.

    A large part of the failure came down to audience reaction: the product (which was allegedly developed in eight weeks) proved cumbersome and unintuitive. Users found syncing the glove to the moves in specific games extremely frustrating, as it required coding the moves into the glove's preset move buttons and then remembering which buttons would generate which move. With the more modern success of Nintendo's Wii and other movement-based controllers and games, we can see the Power Glove was a concept ahead of its time. If the Power Glove's developers had wanted to conduct effective research before building it out, they would have needed to look beyond traditional methods, such as surveys and interviews, to understand how a user might truly interact with the Glove. How could this have been done without a functional prototype, and without slowing down the overall development process?

    Enter the Wizard of Oz method, one potential option, and a potent tool for bridging the chasm between abstract concepts and tangible user understanding. This technique simulates a fully functional system while a human operator (the Wizard) discreetly orchestrates the experience.
This allows researchers to gather authentic user reactions and insights without the prerequisite of a fully built product.

The Wizard of Oz (WOZ) method is named in tribute to the book of the same name by L. Frank Baum. In the book, the Wizard is simply a man hidden behind a curtain, manipulating the reality of those who travel the land of Oz. Dorothy, the protagonist, exposes the Wizard for what he is: essentially an illusion, a con deceiving those who believe him to be omnipotent. Similarly, WOZ takes technologies that may or may not currently exist and emulates them in a way that should convince a research participant they are using an existing system or tool. WOZ enables the exploration of user needs, validation of nascent concepts, and mitigation of development risks, particularly with complex or emerging technologies.

The product team in our example above might have used this method to have users simulate the actions of wearing the glove, programming moves into it, and playing games without needing a fully functional system. This could have uncovered the illogical situation of asking laypeople to code their hardware to respond to a game, shown the frustration of needing to recode the device when changing games, and revealed the cumbersome layout of the controls on the physical device (even if they'd used a cardboard glove with simulated controls drawn in crayon in the appropriate locations).

Jeff Kelley credits himself with coining the term "WOZ method" in 1980 to describe the research approach he employed in his dissertation. However, Paula Roe credits Don Norman and Allan Munro with using the method as early as 1973 to conduct testing on an automated airport travel assistant.
Regardless of who originated the method, both parties agree that it gained prominence when IBM later used it to conduct studies on a speech-to-text tool known as The Listening Typewriter.

In this article, I'll cover the core principles of the WOZ method, explore advanced applications drawn from practical experience, and demonstrate its unique value through real-world examples, including its application to the field of agentic AI. UX practitioners can use the WOZ method as another tool to unlock user insights and craft human-centered products and experiences.

The Yellow Brick Road: Core Principles And Mechanics
The WOZ method operates on the premise that users believe they are interacting with an autonomous system while a human wizard manages the system's responses behind the scenes. This individual, often positioned remotely (or off-screen), interprets user inputs and generates outputs that mimic the anticipated functionality of the experience.

Cast Of Characters
A successful WOZ study involves several key roles:
• The User: The participant who engages with what they perceive as the functional system.
• The Facilitator: The researcher who guides the user through predefined tasks and observes their behavior and reactions.
• The Wizard: The individual who manipulates the system's behavior in real time, providing responses to user inputs.
• The Observer (optional): An additional researcher who observes the session without direct interaction, allowing for a secondary perspective on user behavior.

Setting The Stage For Believability: Leaving Kansas Behind
Creating a convincing illusion is key to the success of a WOZ study. This necessitates careful planning of the research environment and the tasks users will undertake. Consider a study evaluating a new voice command system for smart home devices. The research setup might involve a physical mock-up of a smart speaker and predefined scenarios like "Play my favorite music" or "Dim the living room lights".
The wizard, listening remotely, would then trigger the appropriate responses (e.g., playing a song, verbally confirming the lights are dimmed). Or perhaps it is a screen-based experience testing a new AI-powered chatbot: users enter commands into a text box, with another member of the product team providing responses in real time using a tool like Figma/FigJam, Miro, Mural, or other cloud-based software that allows multiple users to collaborate simultaneously (the author has no affiliation with any of the mentioned products).

The Art Of Illusion
Maintaining the illusion of a genuine system requires the following:
• Timely and Natural Responses: The wizard must react to user inputs with minimal delay and in a manner consistent with expected system behavior. Hesitation or unnatural phrasing can break the illusion.
• Consistent System Logic: Responses should adhere to a predefined logic. For instance, if a user asks for the weather in a specific city, the wizard should consistently provide accurate information.
• Handling the Unexpected: Users will inevitably deviate from planned paths. The wizard must possess the adaptability to respond plausibly to unforeseen inputs while preserving the perceived functionality.

Ethical Considerations
Transparency is crucial, even in a method that involves a degree of deception. Participants should always be debriefed after the session, with a clear explanation of the Wizard of Oz technique and the reasons for its use.
Data privacy must be maintained as with any study, and participants should feel comfortable and respected throughout the process.

Distinguishing The Method
The WOZ method occupies a unique space within the UX research toolkit:
• Unlike usability testing, which evaluates existing interfaces, Wizard of Oz explores concepts before significant development.
• Distinct from A/B testing, which compares variations of a product's design, WOZ assesses entirely new functionalities that might otherwise lack context if shown to users.
• Compared to traditional prototyping, which often involves static mockups, WOZ offers a dynamic and interactive experience, enabling observation of real-time user behavior with a simulated system.

This method proves particularly valuable when exploring truly novel interactions or complex systems where building a fully functional prototype is premature or resource-intensive. It allows researchers to answer fundamental questions about user needs and expectations before committing significant development effort. Let's move beyond the foundational aspects of the WOZ method and explore some more advanced techniques and critical considerations that can elevate its effectiveness.

Time Savings: WOZ Versus Crude Prototyping
It's fair to ask whether WOZ is truly a time-saver compared to even cruder prototyping methods like paper prototypes or static digital mockups. While paper prototypes are incredibly fast to create and test for basic flow and layout, they fundamentally lack dynamic responsiveness. Static mockups offer visual fidelity but cannot simulate complex interactions or personalized outputs. The true time-saving advantage of WOZ emerges when testing novel, complex, or AI-driven concepts. It allows researchers to evaluate genuine user interactions and mental models in a seemingly live environment, collecting rich behavioral data that simpler prototypes cannot.
This fidelity in simulating a dynamic experience, even with a human behind the curtain, often reveals critical usability or conceptual flaws far earlier and more comprehensively than purely static representations, ultimately preventing costly rework later in the development pipeline.

Additional Techniques And Considerations
While the core principle of the WOZ method is straightforward, its true power lies in nuanced application and thoughtful execution. Seasoned practitioners can leverage several advanced techniques to extract richer insights and address more complex research questions.

Iterative Wizardry
The WOZ method isn't necessarily a one-off endeavor. Employing it in iterative cycles can yield significant benefits. Initial rounds might focus on broad concept validation and identifying fundamental user reactions; subsequent iterations can then refine the simulated functionality based on previous findings. For instance, after an initial study reveals user confusion with a particular interaction flow, the simulation can be adjusted, and a follow-up study can assess the impact of those changes. This iterative approach allows for a more agile and user-centered exploration of complex experiences.

Managing Complexity
Simulating complex systems can be difficult for one wizard. Breaking complex interactions into smaller, manageable steps is crucial. Consider researching a multi-step onboarding process for a new software application: instead of one person trying to simulate the entire flow, different aspects could be handled sequentially, or even by multiple team members coordinating their responses. Clear communication protocols and well-defined responsibilities are essential in such scenarios to maintain a seamless user experience.

Measuring Success Beyond Observation
While qualitative observation is a cornerstone of the WOZ method, defining clear metrics can add a layer of rigor to the findings. These metrics should match the research goals.
For example, if the goal is to assess the intuitiveness of a new navigation pattern, you might track the number of times users express confusion or the time it takes them to complete specific tasks. Combining these quantitative measures with qualitative insights provides a more comprehensive understanding of the user experience.

Integrating With Other Methods
The WOZ method isn't an island; its effectiveness can be amplified by integrating it with other research techniques. Preceding a WOZ study with user interviews can establish a deeper understanding of user needs and mental models, informing the design of the simulated experience. Following a WOZ study, surveys can gather broader quantitative feedback on the concepts explored. For example, after observing users interact with a simulated AI-powered scheduling tool, a survey could gauge their overall trust in and perceived usefulness of such a system.

When Not To Use WOZ
WOZ, as with all methods, has limitations. A few scenarios where other methods would likely yield more reliable findings:
• Detailed usability testing: Humans acting as wizards cannot perfectly replicate the exact experience a user will encounter. WOZ is often best in the early stages, where prototypes are rough drafts and your team is looking for guidance on a solution under consideration. Once you have entered the detailed design phase, testing on a more detailed wireframe or prototype is preferable.
• Evaluating extremely complex systems with unpredictable outputs: If the system's responses are extremely varied, require sophisticated real-time calculations that exceed human capacity, or are intended to be genuinely unpredictable, a human may struggle to simulate them convincingly and consistently. This can lead to fatigue, errors, or improvisations that don't reflect the intended system, compromising the validity of the findings.

Training And Preparedness
The wizard's skill is critical to the method's success.
Training the individual(s) who will be simulating the system is essential. This training should cover:
• Understanding the Research Goals: The wizard needs to grasp what the research aims to uncover.
• Consistency in Responses: Maintaining consistent behavior throughout the sessions is vital for user believability.
• Anticipating User Actions: While improvisation is sometimes necessary, the wizard should be prepared for common user paths and potential deviations.
• Remaining Unbiased: The wizard must avoid leading users or injecting their own opinions into the simulation.
• Handling Unexpected Inputs: Clear protocols for dealing with unforeseen user actions should be established. This might involve a set of pre-prepared fallback responses or a mechanism for quickly consulting with the facilitator.

All of this suggests the need for practice before running the actual session. Don't forget to hold a number of dry runs in which colleagues, or others willing to assist, not only participate but also think up inputs that could stump the wizard or throw things off during a live session. I suggest having a believable prepared error statement ready for when a user throws a curveball. A simple response from the wizard of "I'm sorry, I am unable to perform that task at this time" might be enough to move the session forward while also capturing a potentially unexpected situation your team can address in the final product design.

Was This All A Dream? The Art Of The Debrief
The debriefing session following the WOZ interaction is an additional opportunity to gather rich qualitative data. Beyond asking "What did you think?", effective debriefing involves sharing the purpose of the study and the fact that the experience was simulated. Researchers should then probe to understand the reasons behind user behavior and reactions. Asking open-ended questions like "Why did you try that?" or "What were you expecting to happen when you clicked that button?" can reveal valuable insights into user mental models and expectations. Exploring moments of confusion, frustration, or delight in detail can uncover key areas for design improvement. Think about what the Power Glove's development team could have uncovered if they'd asked participants about the experience of programming the glove and trying to remember what they'd programmed into which set of keys.

Case Studies: Real-World Applications
The value of the WOZ method becomes apparent when examining its application in real-world research scenarios. Here is an in-depth review of one scenario, plus a quick summary of another study where this technique proved invaluable in shaping user experiences.

Unraveling Agentic AI: Understanding User Mental Models
A significant challenge in the realm of emerging technologies lies in user comprehension. This was particularly evident when our team began exploring the potential of agentic AI for enterprise HR software. Agentic AI refers to artificial intelligence systems that can autonomously pursue goals by making decisions, taking actions, and adapting to changing environments with minimal human intervention. Unlike generative AI, which primarily responds to direct commands or generates content, agentic AI is designed to understand user intent, independently plan and execute multi-step tasks, and learn from its interactions to improve performance over time. These systems often combine multiple AI models and can reason through complex problems.
For designers, this signifies a shift toward creating experiences where AI acts more like a proactive collaborator or assistant, capable of anticipating needs and taking the initiative to help users achieve their objectives, rather than relying solely on explicit user instructions for every step.

Preliminary research, including surveys and initial interviews, suggested that many HR professionals, while intrigued by the concept of AI assistance, struggled to grasp the potential functionality and practical implications of truly agentic systems: those capable of autonomous action and proactive decision-making. They had no reference point for what agentic AI was, even after we attempted relevant analogies to current examples. Building a fully functional agentic AI prototype at this exploratory stage was impractical; the underlying algorithms and integrations were complex and time-consuming to develop. Moreover, we risked building a solution based on potentially flawed assumptions about user needs and understanding. The WOZ method offered a solution.

Setup
We designed a scenario where HR employees interacted with what they believed was an intelligent AI assistant capable of autonomously handling certain tasks. The facilitator presented users with a web interface where they could request assistance with tasks like "draft a personalized onboarding plan for a new marketing hire" or "identify employees who might benefit from proactive well-being resources based on recent activity". Behind the scenes, a designer acted as the wizard. Based on the user's request and the (simulated) available data, the designer would craft a response that mimicked the output of an agentic AI. For the onboarding plan, this involved assembling pre-written templates and personalizing them with details provided by the user. For the well-being resource identification, the wizard would select a plausible list of employees based on the general indicators discussed in the scenario.
Crucially, the facilitator encouraged users to interact naturally, asking follow-up questions and exploring the system's perceived capabilities. For instance, a user might ask, "Can the system also schedule the initial team introductions?" The wizard, guided by predefined rules and the overall research goals, would respond accordingly, perhaps with a "Yes, I can automatically propose meeting times based on everyone's calendars" (again, simulated).

As recommended, we debriefed participants following each session. We began with transparency, explaining the simulation and that another live human had been posting the responses to queries based on what the participant was saying. Open-ended questions explored initial reactions and envisioned use. Task-specific probing, like "Why did you expect that?", revealed underlying assumptions. We specifically addressed trust and control ("How much trust...?", "What level of control...?"). To understand mental models, we asked how users thought the AI worked. We also solicited improvement suggestions ("What features...?"). By focusing on the why behind user actions and expectations, these debriefings provided rich qualitative data that directly informed subsequent design decisions, particularly around transparency, human oversight, and prioritizing specific, high-value use cases. We also had a research participant who understood agentic AI and could provide additional insight based on that understanding.

Key Insights
This WOZ study yielded several crucial insights into user mental models of agentic AI in an HR context:
• Overestimation of Capabilities: Some users initially attributed near-magical abilities to the AI, expecting it to understand highly nuanced or ambiguous requests without explicit instruction. This highlighted the need for clear communication about the system's actual scope and limitations.
• Trust and Control: A significant theme revolved around trust and control. Users expressed both excitement about the potential time savings and anxiety about relinquishing control over important HR processes. This indicated a need for design solutions that offered transparency into the AI's decision-making and allowed for human oversight.
• Value in Proactive Assistance: Users reacted positively to the AI proactively identifying potential issues (like burnout risk), but they emphasized the importance of the AI providing clear reasoning and allowing human HR professionals to review and approve any suggested actions.
• Need for Tangible Examples: Abstract explanations of agentic AI were insufficient. Users gained a much clearer understanding through these simulated interactions with concrete tasks and outcomes.

Resulting Design Changes
Based on these findings, we made several key design decisions:
• Emphasis on Transparency: The user interface would need to clearly show the AI's reasoning and the data it used to make decisions.
• Human Oversight and Review: Built-in approval workflows would be essential for critical actions, ensuring HR professionals retain control.
• Focus on Specific, High-Value Use Cases: Instead of trying to build a general-purpose agent, we prioritized specific use cases where agentic capabilities offered clear and demonstrable benefits.
• Educational Onboarding: The product onboarding would include clear, tangible examples of the AI's capabilities in action.

Exploring Voice Interaction For In-Car Systems
In another project, we used the WOZ method to evaluate user interaction with a voice interface for controlling in-car functions. Our research question focused on the naturalness and efficiency of voice commands for tasks like adjusting climate control, navigating to points of interest, and managing media playback. We set up a car cabin simulator with a microphone and speakers.
The wizard, located in an adjacent room, listened to the user's voice commands and triggered the corresponding actions (simulated through visual changes on a display and audio feedback). This allowed us to identify ambiguous commands, areas of user frustration with voice recognition (even though it was human-powered), and preferences for different phrasings and interaction styles before investing in complex speech recognition technology. These examples illustrate the versatility and power of the method in addressing a wide range of UX research questions across diverse product types and technological complexities. By simulating functionality, we can gain invaluable insights into user behavior and expectations early in the design process, leading to more user-centered and ultimately more successful products.

The Future Of Wizardry: Adapting To Emerging Technologies
The WOZ method, far from being a relic of simpler technological times, retains relevance as we navigate increasingly sophisticated and often opaque emerging technologies. The method's core strength, the ability to simulate complex functionality with human ingenuity, makes it uniquely suited to exploring user interactions with systems that are still in their nascent stages.

WOZ In The Age Of AI
Consider the burgeoning field of AI-powered experiences. Researching user interaction with generative AI, for instance, can be done effectively through WOZ.
A wizard could curate and present AI-generated content (text, images, code) in response to user prompts, allowing researchers to assess user perceptions of quality, relevance, and trust without needing a fully trained and integrated AI model. Similarly, for personalized recommendation systems, a human could simulate the recommendations based on a user's stated preferences and observed behavior, gathering valuable feedback on the perceived accuracy and helpfulness of such suggestions before algorithmic development. Even autonomous systems, seemingly the antithesis of human control, can benefit from WOZ studies. By simulating the autonomous behavior in specific scenarios, researchers can explore user comfort levels, identify needs for explainability, and understand how users might want to interact with or override such systems.

Virtual And Augmented Reality
Immersive environments like virtual and augmented reality present new frontiers for user experience research, and WOZ can be particularly powerful here. Imagine testing a novel gesture-based interaction in VR: a researcher tracking the user's hand movements could trigger corresponding virtual events, allowing rapid iteration on the intuitiveness and comfort of these interactions without the complexities of fully programmed VR controls. Similarly, in AR, a wizard could remotely trigger the appearance and behavior of virtual objects overlaid on the real world, gathering user feedback on their placement, relevance, and integration with the physical environment.

The Human Factor Remains Central
Despite the rapid advancements in artificial intelligence and immersive technologies, the fundamental principles of human-centered design remain as relevant as ever.
Technology should serve human needs and enhance human capabilities. The WOZ method, with its inherent focus on understanding user reactions and behaviors, acts as a crucial anchor in ensuring that technological progress aligns with human values and expectations. It allows us to inject the human factor into the design process of even the most advanced technologies, helping to ensure these innovations are not only technically feasible but also truly usable, desirable, and beneficial.

Conclusion
The WOZ method stands as a powerful and versatile tool in the UX researcher's toolkit. Its ability to bypass the limitations of early-stage development and directly elicit user feedback on conceptual experiences offers invaluable advantages. We've explored its core mechanics and covered ways of maximizing its impact. We've also examined its practical application through real-world case studies, including its crucial role in understanding user interaction with nascent technologies like agentic AI. The strategic implementation of the WOZ method provides a potent means of de-risking product development. By validating assumptions, uncovering unexpected user behaviors, and identifying potential usability challenges early on, teams can avoid costly rework and build products that truly resonate with their intended audience.

I encourage all UX practitioners, digital product managers, and those who collaborate with research teams to consider incorporating the WOZ method into their research toolkit. Experiment with its application in diverse scenarios, adapt its techniques to your specific needs, and don't be afraid to have fun with it. Scarecrow costume optional.
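The wizard-preparedness ideas above (pre-scripted replies, a prepared fallback statement for curveballs, and simple session metrics such as how often the wizard was stumped) can be sketched in code. Below is a minimal, hypothetical Python sketch of a "wizard console" for a screen-based WOZ session; every name and response in it is illustrative, not part of any real tool described in the article.

```python
from dataclasses import dataclass, field

# Prepared error statement for unplanned inputs, as the article suggests.
FALLBACK = "I'm sorry, I am unable to perform that task at this time."

@dataclass
class WizardSession:
    """A tiny log of one simulated-system session run by a human wizard."""
    script: dict                 # expected user inputs -> scripted replies
    fallback_count: int = 0     # how often the wizard had to use the fallback
    log: list = field(default_factory=list)  # (input, reply) pairs for debrief

    def respond(self, user_input: str) -> str:
        """Return the scripted reply, or the fallback for unplanned inputs."""
        reply = self.script.get(user_input.strip().lower())
        if reply is None:
            # Unexpected input: count it as a metric and stay in character.
            self.fallback_count += 1
            reply = FALLBACK
        self.log.append((user_input, reply))
        return reply

# Usage: a tiny script for the smart-speaker scenario from the article.
session = WizardSession(script={
    "play my favorite music": "Now playing your favorites playlist.",
    "dim the living room lights": "Living room lights dimmed.",
})
print(session.respond("Dim the living room lights"))
print(session.respond("Order me a pizza"))  # unplanned input -> fallback
print(session.fallback_count)               # 1
```

The fallback counter and the session log give the team concrete material for the debrief: each unplanned input is a candidate requirement the real product may need to handle.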
  • The Pixel Watch 3 Is $100 Off for Labor Day
    lifehacker.com
    I use the Pixel Watch 3 every day, and it's one of the best smartwatches I've owned. Naturally, part of me is eyeing the new Pixel Watch 4 Google just announced, but I think last year's model is still plenty good for me. And now that it's on sale for Labor Day (currently $250, down from its usual $350), it can be good for you, too. The Pixel Watch 3's battery life lasts all day (sometimes more than one), which has been my major gripe with past smartwatches I've owned. The heart-rate tracking is solid, and you can keep an eye on your current heart rate at a glance from the clock display. The Google Pixel Watch 3 (41mm, Wi-Fi, Matte Black, Obsidian Band) is $243.11 at Amazon, down from $309.99 (a saving of $66.88).

Since Gemini support finally rolled out to the Pixel Watch, it's also gotten much smarter as a voice assistant. It used to be that I had to carefully issue commands using the exact right phrasing, but now I'm able to reply to messages, set reminders, or look up basic info from my watch without it getting confused as easily. It's still no replacement for a smartphone, but it's a lot more convenient than it used to be.

At $250, the Pixel Watch 3 is at its lowest price ever, according to price trackers. While it's hard to say whether it's likely to get much cheaper when the Pixel Watch 4 comes out on Oct. 9, it's definitely cheaper than the new one will be for a while. So if you're not swayed by the domed display or side-mounted charger of the new model, this one will do most of what that one does for a hefty chunk of change less.
  • The US government drops its CHIPS Act requirements for Intel
    www.engadget.com
    Intel no longer has to fulfill certain requirements or meet milestones that it originally agreed to under the CHIPS Act, now that the government is taking a stake in the company. According to the Wall Street Journal, Intel said in a filing that it can now receive funding from the government as long as it can show that it has already spent $7.9 billion on projects it agreed to take on under a deal with the Commerce Department last year. Reuters notes that Intel has already spent $7.87 billion on eligible CHIPS Act-funded projects.
    In addition, the company no longer has to share a percentage of the total cumulative cash flow from each project with the Commerce Department, and it is exempt from some of the CHIPS Act's workforce policy requirements and most other restrictions as well. However, it still can't use the funds it gets from the government for dividends or share repurchases.
    If you'll recall, the government recently decided to take a 10 percent stake in Intel instead of proceeding with its original CHIPS Act deal. President Donald Trump had previously called for Intel CEO Lip-Bu Tan to resign, prompting a meeting between the two that led to the new agreement. "He walked in wanting to keep his job and he ended up giving us 10 billion dollars for the United States," Trump said. "So we picked up 10 billion." Intel eventually announced that the US government will "make an $8.9 billion investment in Intel common stock." The purchase will be made up of the $5.7 billion previously earmarked for Intel under the CHIPS Act, while the rest ($3.2 billion) will be awarded as part of the Secure Enclave program.
    Intel CFO David Zinsner recently revealed that the company already received the $5.7 billion from the government on Wednesday night.
The government also previously awarded Intel $2.2 billion in grants under the CHIPS Act, bringing the government's total involvement with the company to $11.1 billion.This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-us-government-drops-its-chips-act-requirements-for-intel-133049932.html?src=rss
  • It looks like one Samsung Galaxy S26 Edge battery upgrade we were hoping for isn't happening
    www.techradar.com
    It seems the Galaxy S26 Edge will come with a bigger battery while sticking with lithium-ion.
  • India's Reliance ties up with Google and Meta to drive AI push
    www.cnbc.com
    Indian conglomerate Reliance Industries announced new partnerships with Google and Meta to accelerate the company's push into artificial intelligence.
CGShares https://cgshares.com