• WWW.GADGETS360.COM
    Apple Starts Testing iPad Mini With OLED Screen Expected to Launch in 2026, Tipster Claims
Photo Credit: Apple | Apple's seventh-generation iPad Mini was launched in October 2024

Highlights:
- Apple is said to be testing an OLED screen for a small tablet
- The company is expected to upgrade the iPad Mini with an OLED screen
- Apple launched the iPad Pro (2024) with a Tandem OLED screen last year

Apple is testing a new iPad Mini model equipped with an OLED screen, according to details shared by a tipster. The new 8-inch panel is said to be manufactured by Samsung Display, and the South Korean firm could begin production in H2 2025. Apple is expected to unveil the successor to last year's iPad Mini (7th Generation) in 2026, according to analysts. The company's iPad Pro (2024) was its first tablet model to be equipped with an OLED screen.

Apple's iPad Mini Could Sport an OLED Screen Produced by Samsung

In a post on Weibo, the Chinese microblogging website, Digital Chat Station (translated from Chinese) claims that Apple is evaluating a small OLED screen for the iPad. The smallest tablet in the company's lineup is the iPad Mini, and this indicates that Apple is planning to replace the Liquid Retina LCD screen on the iPad Mini (7th Generation) with an OLED screen.

The tipster also says that they do not know whether Apple's next iPad model will feature an OLED screen with a high refresh rate. The LCD screen on the iPad Mini (7th Generation) refreshes at 60Hz, while the more advanced OLED panels used on the iPad Pro (2024) have a 120Hz refresh rate.

Digital Chat Station states that Apple is currently evaluating the OLED panel produced by Samsung, and production could begin in the second half of 2025. The company could launch an upgraded iPad Mini with the new OLED screen in 2026.

While the next-gen iPad Mini is expected to arrive with an OLED screen, it will not be as advanced as the one on the iPad Pro, which sports a Tandem OLED screen that delivers increased brightness and improved colour reproduction while reducing power consumption.

According to previous reports, Apple is also working on an upgraded iPad Air with an OLED screen that could launch "as early as 2026". At the time, it was claimed that the iPad Air would be equipped with a less advanced OLED panel to keep costs low.

Last year, technology research firm Omdia predicted that Apple's rumoured decision to equip its iPad Air and iPad Mini models with OLED screens would also convince rivals to switch from LCD panels. The demand for these panels could cross the 30-million-unit mark by 2029, according to the firm.

Key specs (iPad Mini, 7th Generation):
- Display: 8.30-inch, 1448x2266 pixels
- Processor: A17 Pro
- Front camera: 12-megapixel
- Rear camera: 12-megapixel
- OS: iPadOS 18
- Storage: 128GB

Further reading: iPad Mini, iPad Mini OLED, iPad Mini 2024, iPad, Samsung Display, Apple
  • MEDIUM.COM
    Fastest Ways To Make $10,000+ Per Month In 2025: AI, Crypto & Automation Secrets!
FASTEST WAYS TO MAKE $10,000+ PER MONTH IN 2025: AI, CRYPTO & AUTOMATION SECRETS!

The Game Has Changed. Are You Ready to Print Money in 2025?

Welcome to the Golden Era of Online Wealth, where AI works, crypto grows, and you get rich while sleeping. If you're still grinding the 9-5 hustle, you're LAGGING behind. 2025 is the year of automation, passive income, and digital domination, and you're either in, or you're out. Want to start making money IMMEDIATELY? Join the #1 Money-Making Community Now! CLICK HERE TO START PRINTING CASH! Let's get into the 5 fastest, most profitable, and 100% automated ways to stack $10K+ per month in 2025.

---

1. AI-Powered Affiliate Marketing: The Set-and-Forget Cash Machine

SEO Keywords: best affiliate programs 2025, AI affiliate marketing, passive income with AI

What if you could earn $500+ commissions per sale, without selling, cold calling, or even showing your face? That's AI-powered affiliate marketing in 2025.
Step 1: Sign up for high-ticket affiliate programs (SaaS, AI tools, Web3 platforms).
Step 2: Let AI tools (ChatGPT, Claude, Jasper AI) write blogs, create YouTube scripts & automate your marketing.
Step 3: Set up AI-powered email funnels & chatbots that sell 24/7 for you.
INSIDER SECRET: Use Quora, Medium, & AI-generated YouTube Shorts to drive FREE traffic to your links! JOIN THE #1 AFFILIATE CASH MACHINE NOW! CLICK HERE TO START!

---

2. AI Website Flipping: Turn $100 into $10,000 FAST!

SEO Keywords: AI website flipping, best Flippa businesses 2025, profitable website niches

Why buy and sell houses when you can flip AI-generated websites for 10X profits, in DAYS, not months?
Step 1: Use 10Web, Framer AI, or Wix AI to create fully automated, high-value websites in hot niches (finance, AI, crypto, SaaS).
Step 2: Fill them with SEO-optimized, AI-generated content to attract buyers.
Step 3: Sell them on Flippa, Empire Flippers, or Motion Invest for thousands in pure profit.
Limited-Time Opportunity! Learn How to Flip Websites for $10K+ Profits! CLICK HERE TO JOIN!

---

3. AI YouTube Automation: $10K+ Per Month with No Face, No Effort!

SEO Keywords: faceless YouTube automation 2025, AI YouTube script generator, high CPM YouTube niches

What if AI could run a YouTube channel FOR YOU, and you collect ad revenue, sponsorships, and affiliate commissions? In 2025, it's not only possible, it's EASY.
Step 1: Use ElevenLabs, HeyGen, and Pictory AI to create high-quality, AI-generated faceless videos.
Step 2: Target high-CPM niches (Forex, Crypto, AI, Luxury, SaaS).
Step 3: Monetize with ads, affiliate links, digital products, and sponsors.
Pro Tip: Channels in the finance & crypto space make $30-$100 per 1,000 views, way higher than entertainment or vlogs! WANT TO START? JOIN NOW & BUILD YOUR AUTOMATED YOUTUBE EMPIRE!

---

4. AI-Powered Dropshipping: Make Bank Without Touching Inventory!

SEO Keywords: AI dropshipping 2025, fastest Shopify store setup, winning products AI

You don't need warehouses, shipping, or headaches to make money in e-commerce anymore. AI automates EVERYTHING in 2025.
Step 1: Find trending products using Minea, Niche Scraper, or AutoDS.
Step 2: Use Gemini AI or Jasper AI to generate viral ad copy.
Step 3: Scale fast with AI-optimized TikTok ads & Instagram reels.
Build an AI-powered store today! CLICK TO START EARNING!

---

5. Crypto & DeFi Passive Income: Let Your Money Work for You!

SEO Keywords: best crypto to invest 2025, AI crypto trading bots, DeFi passive income

Forget gambling on meme coins, the real money is in AI crypto trading & DeFi staking.
Here's how to get steady, passive crypto income in 2025.
Step 1: Use AI trading bots like Pionex, Bitsgap, or 3Commas for auto-trading.
Step 2: Stake ICP, Solana, or ETH on Lido, Aave, or ICP DeFi protocols for high APY.
Step 3: Earn daily rewards and stack passive crypto income.
JOIN THE CRYPTO MONEY MACHINE NOW! CLICK HERE TO START!

---

FINAL THOUGHTS: You Can Make $10K+ Per Month, But Only If You TAKE ACTION!

2025 is not for the lazy. This is the year of AI-driven money, and those who move FAST will win BIG. Your 2025 Wealth Plan Starts NOW:
AI Affiliate Marketing: Earn commissions while AI does the selling.
Website Flipping: Build & flip digital assets for 5-figure profits.
YouTube Automation: Make passive income from faceless videos.
AI Dropshipping: Scale without ever touching inventory.
Crypto & DeFi: Generate 24/7 crypto passive income.
DON'T WAIT! Get Inside the #1 Online Money-Making Community Now! CLICK HERE TO START PRINTING CASH IN 2025! Which method excites you the most? Comment below & let's WIN together!

---
  • MEDIUM.COM
    The Playground of AI: Exploring the Basics of Reinforcement Learning
The Playground of AI: Exploring the Basics of Reinforcement Learning

Photo generated from OpenAI's ChatGPT (prompted March 30, 2025)

The current trend in the field of Data Science revolves around Generative Artificial Intelligence (GenAI), particularly chatbots utilizing Large Language Models (LLMs). Before that, it was about predicting a class or score from different features with the use of classical and deep learning models. However, throughout this timeline, a subset of machine learning has always existed, though not as widely recognized, and continues to evolve and thrive. This field is Reinforcement Learning (RL), which focuses on training agents to make decisions by interacting with their environment to maximize cumulative rewards.

Photo taken from https://www.devopsschool.com/

Unlike supervised learning, where the objective is to learn from labeled examples, or unsupervised learning, which focuses on identifying patterns in data, RL involves an autonomous agent that learns by making decisions and adapting based on the outcomes of its actions, often without prior data and typically through a trial-and-error process. [1][2] Reinforcement Learning even has deep roots in various disciplines, including psychology, neuroscience, economics, and engineering. The plethora of perspectives and influences makes RL a dynamic and highly interdisciplinary field. [3] In this introduction to Reinforcement Learning, we will explore the foundation and mathematics behind the field, the main framework with a brief teaser on the different advancements, and of course, a showcase of how RL works in Python.

Fundamentals

Photo taken from https://thedecisionlab.com/

At the core of Reinforcement Learning lies the Markov Decision Process (MDP), a mathematical framework that models decision-making in environments filled with uncertainty. An MDP consists of a set of states representing different situations an agent can encounter, actions the agent can take, and a transition probability that dictates the likelihood of moving between states. Additionally, a reward function provides feedback to the agent, helping it learn which actions lead to favorable outcomes. A key aspect of MDPs is the discount factor, which determines how much future rewards influence the agent's decisions, favoring either short-term or long-term gains. Another important property of an MDP is that it is independent of past actions and states: the prediction of the next state relies solely on the current state.

Photo taken from https://people.stfx.ca/

Another key concept in RL is the Multi-Armed Bandit (MAB) problem, which captures the trade-off between exploration and exploitation. In this framework, an agent repeatedly chooses from K possible actions (or arms) to maximize cumulative rewards over time, even though the reward distribution for each action is unknown. The agent must balance exploring new options to gather information and exploiting the best-known choice for immediate benefit. Unlike supervised learning, which provides direct feedback on correct decisions, RL uses evaluative feedback that only reflects the effectiveness of chosen actions.
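To make the notation concrete before moving on, the MDP and bandit quantities described above can be written compactly as follows. These are the standard textbook forms (not reproduced from the article's figures):

\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, R, \gamma), \qquad P(s' \mid s, a) = \Pr\left(S_{t+1} = s' \mid S_t = s, A_t = a\right)

G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}, \qquad 0 \le \gamma \le 1 \quad \text{(discounted return the agent maximizes)}

q_*(a) = \mathbb{E}\left[R_t \mid A_t = a\right] \quad \text{(expected reward of arm } a \text{ in the bandit setting)}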
Design

Photo taken from https://lilianweng.github.io/

The vanilla framework of Reinforcement Learning involves an Agent, which is the actor or decision-maker operating within the Environment, defined as the world or system that sets the rules in which the agent can operate. Think of it as a game where the player is the Agent and the Environment is the confines of the said game. The Agent is bounded by the rules and design of the game; it cannot think outside the box, and there are no cheat codes (!!!). As a player or Agent plays the game, it will perform an Action (a, A) from the set of possible moves to interact with the Environment. Performing an action will lead to a State (s, S), a specific condition or configuration of the Environment at a given time as perceived by the Agent. As we play the game, we usually aim for an objective in order to progress; this is called the Reward (r, R). Defined as the feedback or result from the Environment based on the Agent's action, it tells the Agent how good or bad the action was. These are the basic parts of a vanilla or basic RL framework.

Now we go deeper into the framework of RL. The main goal of an RL problem is to find the optimal strategy, which is the sequence of Actions and States the Agent must follow in order to maximize the Rewards. A sequence of Actions and States is called a Policy (π). Think of this as the strategy for achieving things like the highest score, finishing the game, or even just trolling around and achieving nothing.

A simple RL model can now work with this framework and let the Agent solve the Environment by trial and error, simulating all the possible combinations there are in order to identify the Policy or Policies that will maximize the Rewards. However, depending on the complexity of the Environment, there can be too many combinations of Actions and States, which can be computationally expensive and time-consuming. This is why algorithms were added to the initial framework to solve this dilemma.

First, in order to determine the quality of a Policy, we need a quantitative measure of the expected return of the agent being in a certain state. This is called a Value Function, and it is derived from the Bellman equation, which expresses the value of a state (or state-action pair) in terms of the expected immediate reward plus the discounted value of the next state (or next state-action pair). In RL, the Value Function can be divided into two broad categories: the State Value Function and the Action Value Function.

Equation for State Value Function

The State Value Function represents the expected cumulative reward an agent can achieve starting from a specific State and following a given Policy. This is crucial in the evaluation of deterministic policies or when understanding the value of being in a particular state is required.

Equation for Action Value Function

On the other hand, the Action Value Function represents the expected cumulative reward an agent can achieve from a defined State by taking a specific Action and following a given policy thereafter. It is mainly used to evaluate and compare the potential of different actions when they are taken in the same state. It is crucial for the selection of actions, where the goal is to determine the most appropriate action for each situation. As action-value functions take into account the expected return of different actions, they are particularly useful in environments with stochastic policies.

Equation for Optimal State Value Function and Action Value Function

Solving an RL task involves identifying a Policy that maximizes long-term Rewards, which follows the Bellman Optimality Equation above. It also indicates the probabilistic nature of RL in transitioning to a State with a certain Reward given the current State and chosen Action.
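Since the equation images referenced by the captions above do not survive in this text version, here are the standard forms of these quantities, written in the usual notation for a policy π with discount factor γ (standard textbook notation, not copied from the article's figures):

V^{\pi}(s) = \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \,\middle|\, S_t = s\right]

Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \,\middle|\, S_t = s, A_t = a\right]

V^{*}(s) = \max_{a} \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma V^{*}(s')\right], \qquad Q^{*}(s, a) = \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma \max_{a'} Q^{*}(s', a')\right]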
This equation serves as the baseline for developing the RL algorithms and models currently used in the field.

Playbook

There are a lot of models and methods currently developed in the field of Reinforcement Learning. To keep things brief, we will go through the general classifications of the models to give an overview of how things are defined.

Photo taken from https://www.sciencedirect.com

One classification of Reinforcement Learning is Model-free versus Model-based Methods. As the figure above indicates, Model-free Methods determine the optimal policy (or value function) directly, without creating a model of the environment. In this framework, the Agent learns only from the Observations, Actions, and Rewards it experiences in the Environment (experience-based learning). This makes it a straightforward and flexible approach, especially for complex Environments where understanding the system's dynamics is difficult or impractical. However, it often requires a massive number of interactions with the Environment, making it computationally expensive and slower to learn. [4]

On the other hand, Model-based Methods build a representation of how the environment behaves for planning and improving decision-making. The process involves explicitly learning or using a model of the environment's dynamics instead of relying only on direct experience. This makes Model-based Methods significantly more sample-efficient, as they allow for planning and strategic decision-making rather than pure trial and error. The challenge and downside is learning an accurate model of the environment: if the model is imperfect or inaccurate, the agent may make poor decisions based on incorrect predictions.

Photo taken from https://github.com/

Another classification is in terms of how a Policy is updated based on interaction with the Environment. This time it can be Online, Off-policy, or Offline Reinforcement Learning. First, Online RL is a dynamic learning approach where an agent continuously interacts with the environment, takes Actions, and updates its Policy based on real-time feedback. This method allows the Agent to adapt quickly to changes in the Environment, making it suitable for tasks where conditions are unpredictable. However, since learning happens through direct interaction, Online RL often requires a large number of trials, making it computationally expensive and inefficient for complex problems.

Unlike Online RL, Off-policy RL does not rely solely on real-time interactions. Instead, it allows agents to learn from previously collected data, making training more sample-efficient. This approach enables the agent to improve its Policy using experiences generated by other Policies or past iterations. While Off-policy RL provides flexibility and efficiency, it also introduces challenges such as distribution mismatch, where the data used for training may not fully align with the Optimal Policy being learned.

Offline RL, also known as Batch RL, takes learning a step further by training Policies exclusively from pre-collected datasets without any interaction with the Environment. This makes it highly valuable in situations where real-world data collection is costly, dangerous, or impractical, such as healthcare, robotics, and autonomous driving. Since Offline RL lacks direct interaction with the environment, it faces difficulties in generalizing to new situations and avoiding biases in the dataset. A small sketch of what one of these methods looks like in code is shown below.
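To make the model-free, off-policy idea concrete, here is a minimal tabular Q-learning sketch. It is not part of the original article's notebook; the environment choice (FrozenLake-v1) and the hyperparameters are illustrative assumptions.

import numpy as np
import gymnasium as gym

# Small discrete environment, chosen here only for illustration
env = gym.make("FrozenLake-v1", is_slippery=False)
n_states = env.observation_space.n
n_actions = env.action_space.n

Q = np.zeros((n_states, n_actions))   # tabular Action Value Function
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # illustrative hyperparameters

for episode in range(2000):
    state, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy behaviour policy: balance exploration and exploitation
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Off-policy update: bootstrap from the greedy action in the next state
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
env.close()

Because the update bootstraps from the greedy action rather than the action the behaviour policy actually takes next, the data used for learning does not have to come from the policy being improved, which is exactly the off-policy property discussed above.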
Again, this is only a glimpse of the diverse models in the field of Reinforcement Learning. Extensive discussions are needed to understand the ins and outs of each algorithm. But for now, let us move on to showcasing and visualizing how RL works.

Simulation

Now that we know the basics of Reinforcement Learning, we can proceed to applying and simulating an RL problem. The section below includes an overview of the libraries being used, initializing an Environment, simulating an Action, and training an Agent on two different Environments: CartPole and Atari Breakout.

In this code walkthrough, the main libraries used are gymnasium, which provides an Application Programming Interface (API) standard for reinforcement learning with a diverse collection of reference Environments, and Stable Baselines3 (SB3), which contains a set of reliable implementations (i.e., algorithms and wrappers) of reinforcement learning algorithms in PyTorch. Other modules are used for navigating the file directory, ensuring compatibility, and visualizing the results by rendering videos of the Environment simulations.

# File Directory
import glob
import io
import base64
import os
import shutil

# RL
import gymnasium as gym
from stable_baselines3 import PPO  # Algorithm, check docs for others
from stable_baselines3.common.vec_env import DummyVecEnv  # Wrapper for the env

# For Rendering Video in Colab
from gymnasium.wrappers import RecordVideo
from IPython.display import HTML
from pyvirtualdisplay import Display
from IPython import display as ipythondisplay
import matplotlib.pyplot as plt

# Compatibility
import numpy as np
np.bool8 = np.bool_

CartPole Level

env_name = "CartPole-v1"
environ = gym.make(env_name, render_mode="rgb_array")

The first part is initializing the Environment of the RL problem. We can choose from the different Environments in the gymnasium documentation and from other third-party created Environments. Note that each environment has different dependencies, so check the documentation first. For the first simulation, we will select CartPole-v1, where the task is to balance a pole attached to a cart by moving the cart left or right. The goal is to balance the pole for as long as possible, with the threshold of the Environment set to 500 frames to ensure that an episode (a trial/run of the game) will not be too long.

environ = gym.make(env_name, render_mode="rgb_array")
env = RecordVideo(environ, video_folder="./video", disable_logger=True, video_length=1000)

for episode in range(5):
    obs, info = env.reset()
    done = False
    score = 0
    while not done:
        action = env.action_space.sample()  # Generate random action
        obs, reward, terminated, truncated, info = env.step(action)  # Proceed on the generated action
        score += reward
        done = terminated or truncated  # Ensure loop ends properly
        print([action, obs, reward, terminated])
        print(f'Action: {action}')
        print(f'State: {obs}')
        print(f'Reward: {reward}')
    print(f'Episode: {episode} Total Score: {score}\n')
env.close()

To visualize the CartPole Environment, we can simulate an episode by choosing random Actions, dictated by env.action_space.sample(), and check what happens. If we look at the output of the code, we can see that the Action can either be 0 (the cart moves left) or 1 (the cart moves right). For the State, as described in the documentation, it is an array of length 4 with the elements (in sequence) being cart position, cart velocity, pole angle, and pole angular velocity. The third part is the Reward, which is 1 if the pole is still balanced within the allowed angle and 0 if the pole exceeds the threshold angle of +12 or -12 degrees. Lastly, at the end of each Episode,
Lastly, at the end of each Episode, we tally how long the pole is balanced with each movement of the cart and get the total Reward.# Opening Video of Policy in Colab# Similar with Env.render()def show_video(path='video/*.mp4'): mp4list = glob.glob(path) if len(mp4list) > 0: mp4 = mp4list[0] video = io.open(mp4, 'r+b').read() encoded = base64.b64encode(video) ipythondisplay.display(HTML(data='''<video alt="test" autoplay loop controls style="height: 400px;"> <source src="data:video/mp4;base64,{0}" type="video/mp4" /> </video>'''.format(encoded.decode('ascii')))) else: print("Could not find video")show_video()CartPole-v1 Episode with random ActionsAs we can see in the rendered video above, the pole was balanced for about 2 seconds before the game was terminated due to exceeding the threshold angle. Note that this is the result of doing random Actions per step which means the Policy is not optimal. With this, we can proceed to training the Agent by simulating multiple Episodes or trials for the Agent to know how to approach the Environment better than taking random Actions.#Wrap env into a dummy vectorized environment (for compatibility purposes)env = DummyVecEnv([lambda: env]) # Defining the agent (policy, environment, log path)model = PPO('MlpPolicy', env, tensorboard_log=log_path, verbose=1) model.learn(total_timesteps=20000) # Timesteps Depending on complexity of environmentTraining the model or the Agent required the Environment to be wrapped into a vectorized Environment which ensures compatibility with the code and can allow us to train multiple stacks of Environment per step to speed up the training process. Next is defining the algorithm for the Policy which for now is set to default Proximal Policy Optimization (PPO) with the main idea is that after an update, the new policy should be not too far from the old policy. For the MlpPolicy part, it is base Policy to be used on the which is dependent on the Environment. MlpPolicy is used with low-dimensional, vector observations. Next is we make the Agent learn the Environment by simulating multiple Policies and set a max number of timesteps to cap the learning process. Given that this is simple RL problem, 20,000 timesteps is almost enough for us to achieve high Rewards. Running the model.learn() outputs the state of the training, showing multiple metrics (losses, variance, deviation) on how training is performing.from stable_baselines3.common.evaluation import evaluate_policy #Testing/Validation# Random Agent, before trainingmean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=100)print('Trained Model')print(f"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")model2 = PPO('MlpPolicy', env, tensorboard_log=log_path, verbose=1)mean_reward2, std_reward2 = evaluate_policy(model2, env, n_eval_episodes=100)print('Base Model')print(f"mean_reward:{mean_reward2:.2f} +/- {std_reward2:.2f}")Next part is evaluating how the Agent now performs after learning the Environment and simulating multiple Policies. As indicated above, the Agent now achieved the threshold Reward of 500 which means that it know the optimal Policy of the Environment. 
from stable_baselines3.common.evaluation import evaluate_policy  # Testing/Validation

# Evaluate the trained agent
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=100)
print('Trained Model')
print(f"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")

# Random (untrained) agent, before training
model2 = PPO('MlpPolicy', env, tensorboard_log=log_path, verbose=1)
mean_reward2, std_reward2 = evaluate_policy(model2, env, n_eval_episodes=100)
print('Base Model')
print(f"mean_reward:{mean_reward2:.2f} +/- {std_reward2:.2f}")

The next part is evaluating how the Agent performs after learning the Environment and simulating multiple Policies. As indicated above, the Agent now achieves the threshold Reward of 500, which means that it knows the optimal Policy of the Environment. If we compare the trained model with the base model, we can see the major improvement in the achieved Reward, going from an average of 33-34 to 500 (with a standard deviation of 0).

folder_path = "/content/video_test/"  # Change to your folder path
shutil.rmtree(folder_path)

env_name = 'CartPole-v1'
environ = gym.make(env_name, render_mode="rgb_array")
env = RecordVideo(environ, video_folder="/content/video_test", disable_logger=True, video_length=1000)

for episode in range(2):
    obs, info = env.reset()
    done = False
    score = 0
    while not done:
        action, _ = model.predict(obs)
        obs, reward, terminated, truncated, info = env.step(action)  # Proceed on the generated action
        score += reward
        done = terminated or truncated  # Ensure loop ends properly
        # print([action, obs, reward, terminated])
    print(f'Episode: {episode} Score: {score}\n')
env.close()

show_video('video_test/*.mp4')

A CartPole-v1 Episode after Training with 20000 Timesteps

Visualizing the results of the training, we can see in the rendered video above that the pole stays balanced throughout the simulation. The video here lasts around 10 seconds, which is the maximum duration set in the Environment (500 frames). This is basically how a Reinforcement Learning workflow is done: initialize an Environment, train the Agent with a selected algorithm and a chosen training duration, then check the results of the training.

Breakout Level

Next, we proceed to a more difficult Environment, Breakout, a famous Atari game. The dynamics of the Environment are similar to Pong: moving a paddle to send the ball toward the brick walls at the top of the screen. The goal is to destroy as many bricks as possible, if not all of them, before the ball reaches the bottom of the screen.

from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import VecFrameStack
from stable_baselines3.common.env_util import make_atari_env
import ale_py

gym.register_envs(ale_py)

env_atari = gym.make('ALE/Breakout-v5', render_mode="rgb_array")
env = RecordVideo(env_atari, video_folder="/content/atari", disable_logger=True, video_length=1000)

for episode in range(5):
    obs, info = env.reset()
    done = False
    score = 0
    while not done:
        action = env.action_space.sample()  # Generate random action
        obs, reward, terminated, truncated, info = env.step(action)  # Proceed on the generated action
        score += reward
        done = terminated or truncated  # Ensure loop ends properly
        # print([action, obs, reward, terminated])
        print(f'Action: {action}')
        print(f'State: {obs}')
        print(f'Reward: {reward}')
    print(f'Episode: {episode} Score: {score}\n')
env.close()

Again, we initialize the Environment (again, check the documentation to ensure compatibility and install the necessary dependencies) and simulate an Episode using random Actions. For this game, there are four actions that can be taken: 0 for no action, 1 to fire the ball (to start the game), 2 to move the paddle right, and 3 to move the paddle left. The State is an observation space of Box(0, 255, (210, 160, 3), np.uint8), which holds the RGB pixel values of the Environment. The Reward is given if a brick is destroyed in the specific state. Again, the goal is to destroy as many bricks as possible before game over.

show_video('atari/*.mp4')

Breakout Episode with Random Actions

Looking at the results of the episode with random Actions, we can see that the agent does not follow the trajectory of the ball (as expected, given the Actions are random). It got lucky at the end and moved the paddle to hit the ball twice, breaking two bricks and scoring two points.
Again, we need to train the Agent and make it learn by simulating different Policies.

env_atari = make_atari_env('ALE/Breakout-v5', n_envs=4, seed=0)
env_atari_vec = VecFrameStack(env_atari, n_stack=4)

# Reset environment to get initial frames
obs = env_atari_vec.reset()

# Capture a frame from each environment
frames = env_atari.get_images()  # Returns a list of 4 frames (one per env)

# Create a 2x2 grid to display the frames
fig, axes = plt.subplots(2, 2, figsize=(10, 10))
for i, ax in enumerate(axes.flat):
    ax.imshow(frames[i])  # Display the frame for each env
    ax.axis("off")
    ax.set_title(f"Env {i+1}")
plt.tight_layout()
plt.show()

Different Breakout Instances to be Stacked During Training

For the training, we will be running four separate instances of the Environment at the same time. These instances run in parallel, speeding up training by processing multiple game states at once. One more thing to note is that for this Environment, the trajectory of the ball is important so that the paddle can be moved correctly toward the direction of the ball as it comes down. A single frame is not enough to tell where the ball is going, which is why we need to stack frames (given by VecFrameStack).

model_atari = A2C('CnnPolicy', env_atari_vec, verbose=1, tensorboard_log=log_path)
model_atari.learn(total_timesteps=500000, log_interval=10000)

For the Atari Breakout problem, we will be using A2C (Advantage Actor-Critic), an algorithm that combines value-based and policy-based approaches. The CnnPolicy is required for image-based observations such as Breakout. Next, we train the Agent for 500,000 timesteps to simulate different Policies, updating the Policy with the chosen algorithm. As indicated along with the different metrics of the training, it took around 30 minutes to complete 500,000 timesteps, and this is with the process parallelized across four instances at the same time.
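As with PPO above, the statement that A2C "combines value-based and policy-based approaches" can be made precise: the actor-critic policy gradient weights the log-probability of each action by an advantage estimate built from the learned value function. This is the standard form, not taken from the article:

\nabla_\theta J(\theta) \approx \mathbb{E}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}(s_t, a_t)\right], \qquad \hat{A}(s_t, a_t) = Q(s_t, a_t) - V(s_t) \approx r_t + \gamma V(s_{t+1}) - V(s_t)

The actor is the policy π_θ (the policy-based part) and the critic is the value function V (the value-based part).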
mean_reward, std_reward = evaluate_policy(model_atari, env_atari_vec, n_eval_episodes=20)
print('Trained Model')
print(f"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}")

folder_path = "/content/atari_test/"  # Change to your folder path
shutil.rmtree(folder_path)

from stable_baselines3.common.vec_env import VecVideoRecorder  # Needed to record a vectorized env

# env_atari = gym.make('ALE/Breakout-v5', render_mode="rgb_array")
env_atari = make_atari_env('ALE/Breakout-v5', n_envs=1, seed=13)
env = VecFrameStack(env_atari, n_stack=4)
env = VecVideoRecorder(env, video_folder="/content/atari_test", record_video_trigger=lambda x: x == 0, video_length=1000)

for episode in range(5):
    obs = env.reset()
    done = False
    score = 0
    while not done:
        action, _ = model_atari.predict(obs)
        obs, reward, dones, info = env.step(action)  # A VecEnv step returns (obs, rewards, dones, infos)
        score += reward
        done = dones[0]  # Single vectorized env: take its done flag
        # print([action, obs, reward, done])
    print(f'Episode: {episode} Score: {score}\n')
env.close()

show_video('atari_test/*.mp4')

A Breakout Episode after Training with 500,000 Timesteps

Then, looking at the results of the training, we achieved an average reward of 23 destroyed bricks. Looking at the sample simulation of the trained model, we can see that the movement of the paddle is now slightly coordinated with the trajectory of the ball (although it failed badly after one successful life). With the maximum score being 432 for Atari Breakout, imagine how many timesteps are needed to train the model for the Agent to reach the max score. This highlights a dilemma in RL: it is computationally expensive, and it can take a very long time of training even on a simple Environment to find the optimal Policy.

Next Steps

In the walkthrough above, we simulated a Reinforcement Learning problem and trained an Agent through a trial-and-error process. There are multiple directions from here, especially if we want to achieve more reward, as in the Atari Breakout game where the agent only scored 23 on average after training. The most straightforward option is increasing the number of timesteps to millions to let the Agent experience more policies. This will take forever but will ensure better results. Other options, similar to traditional supervised learning, are hyperparameter tuning and algorithm selection. Each algorithm has its own strengths, so it is important to check the papers of the high-performing ones (if not all). Each algorithm also has its own parameters, such as the learning rate, that can be tweaked for each Environment to improve performance.

Another exploration that can be done is, instead of using model-free and on-policy methods, to use model-based, off-policy, or offline RL methods and check how they fare for the specific Environment. There are a lot of branches of RL in terms of models and algorithms, so it is best to know them all.

Lastly, in my opinion, the best way to understand RL and to specialize in it is to learn to create a custom Environment: defining the specifics, the rules, the Agent, what Actions it can take, and how to score the Rewards. Knowing how this works can let us be creative enough to apply RL in different situations. A minimal sketch of what such a custom Environment can look like is shown below.
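Here is that sketch: a minimal, illustrative custom Gymnasium Environment. It is not part of the original notebook; the environment, its reward scheme, and the class name are invented purely for demonstration.

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class GridWalkEnv(gym.Env):
    """Toy custom Environment: the Agent walks along a 1-D line and must reach position 10."""

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(2)  # 0 = step left, 1 = step right
        self.observation_space = spaces.Box(low=0.0, high=10.0, shape=(1,), dtype=np.float32)
        self.position = 0.0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.position = 0.0
        return np.array([self.position], dtype=np.float32), {}

    def step(self, action):
        # Apply the Action and keep the State inside the allowed range
        self.position += 1.0 if action == 1 else -1.0
        self.position = float(np.clip(self.position, 0.0, 10.0))
        terminated = self.position >= 10.0      # goal reached
        reward = 1.0 if terminated else -0.01   # small step penalty, bonus at the goal
        return np.array([self.position], dtype=np.float32), reward, terminated, False, {}

# The custom class plugs into the same training loop used earlier, for example:
# model = PPO('MlpPolicy', GridWalkEnv(), verbose=1).learn(total_timesteps=10_000)

Defining the action space, observation space, reset, and step is all Gymnasium requires; everything else (the rules and the reward scheme) is where the creativity mentioned above comes in.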
Conclusion

Reinforcement Learning is normally not as popular and widely used as our typical supervised and unsupervised machine learning worlds. However, it is a vast and quickly evolving field and, similar to those two, provides insights that are new even to domain experts. RL is already being applied in different fields such as Robotics, where instead of investing a lot of money into hardware for testing, RL can do the simulations; Gaming, where AI is used in different aspects such as game content, game testing, strategies, etc.; and Autonomous Driving, which enables vehicles to learn optimal behaviors to ensure safety and efficiency. As part of the education sector, it is noteworthy to highlight the application of RL in creating customized curricula in order to maximize the learning and motivation of students going through classes.

Again, this is only part 1 of a Reinforcement Learning series from someone who started with minimal knowledge of the field. This hopefully becomes a road to specializing in the field of RL, or at least in a specific area of it. This is only the start of uncovering and knowing the ins and outs of the Playground of AI.

Python notebook for the scripts provided: https://github.com/redvjames/RL_sandbox (Tested in Google Colab)

References

[1] Ghasemi, M., Moosavi, A. H., Sorkhoh, I., Agrawal, A., Alzhouri, F., & Ebrahimi, D. (2024). An introduction to reinforcement learning: Fundamental concepts and practical applications. arXiv preprint arXiv:2408.07712. https://doi.org/10.48550/arXiv.2408.07712

[2] Naeem, M., Rizvi, S. T. H., & Coronato, A. (2020). A gentle introduction to reinforcement learning and its application in different fields. IEEE Access, 8, 209320-209344. https://doi.org/10.1109/ACCESS.2020.3038605

[3] Ahilan, S. (2023). A succinct summary of reinforcement learning. arXiv preprint arXiv:2301.01379. https://doi.org/10.48550/arXiv.2301.01379

[4] AlMahamid, F., & Grolinger, K. (2021, September). Reinforcement learning algorithms: An overview and classification. In 2021 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE) (pp. 1-7). IEEE. https://doi.org/10.1109/CCECE53047.2021.9569056
  • WWW.RESETERA.COM
    Nintendo Switch 2 US Launch Lineup - June 5, 2025
Darknight (OP): * Will add more titles as they come

YuriLowell: In for Mario Kart World and probably Split Fiction.

Patitoloco: To be fair, that's a damn strong release lineup, especially for Switch 1-only users that couldn't play a lot of these games before.

Lumination: Love FF7R and BD, but very disappointed that SE's output is just a port and a remaster. Otherwise, that's honestly a decent lineup if you're generally Nintendo-only. I do wish there was anything even remotely close to BotW tier from Nintendo though.

cursed knowledge: damn, no sports games at launch? did this happen with Switch 1 as well?

mysteriousmage09: Extremely weak if you've played the 3rd party games elsewhere.

fluffydelusions: Mario Kart is all I care about at launch.

claudelabonte: I was looking exactly for this, thanks!

shinobi602: If you don't play on anything other than Nintendo, there's a lot to choose from. Otherwise... ehh

BeansBeansBeans (quoting Darknight's list: Mario Kart World, Nintendo Switch 2 Welcome Tour, Hogwarts Legacy, Civilization VII, Cyberpunk 2077 Ultimate Edition, Fortnite): Not a bad day one for me!

Faceless_Caesar (quoting "Arcade Archives 2 Ridge Racer"): Nintendo doing what Sony couldn't and is launching with Ridge Racer.

Buttonbasher: Strong lineup of games I've already completed, and Mario Kart, which I feel priced out of.

KodiakGTS: Guess I'll be playing a lot of Mario Kart at launch lol

Darknight (quoting cursed knowledge): I don't recall any sports games at launch for the Switch 1. They'll probably be aligned with the next iteration. That means stuff like Madden, FC and NBA 2K will release in the fall.

gogojira: That's a bad launch lineup, I'm sorry. And I was incredibly hyped. I'm in for Mario Kart, but ugh.

blownspeakers: so there's just 3 exclusive games, right?

MarcelloF: Besides Mario Kart, pretty weak launch.

Zebesian-X: This is gonna be a Mario Kart + Xenoblade X machine for me

DJwest (quoting mysteriousmage09): Yep

Faiyaz: Donkey Kong Bananza is one month after. The four-pack of Legends ZA, Prime 4, Mario Kart World and DK is going to carry quite hard.

El_Dabrah: If you only had a Switch 1 and no other platform, this is a dope launch. Otherwise I'm looking at this and thinking "wow Mario Kart is the only launch game".

mrmickfran: Just Mario Kart and Bravely Default for me

OofCalibro: Can't believe there's no actual big Nintendo game besides MKW on launch. Just tell me if Bayonetta 3 will run better on this shit...

Audiblee: Yuck. I'm glad I saved Mario Wonder for SW2.

logash: As I expected, I am not interested in anything other than Mario Kart World. Luckily I anticipated this and purchased Echoes of Wisdom, Brothership, and Xenoblade with the intent of playing them on Switch 2. I will be stacked on Switch 2 launch no matter what.

DustC H A O S (quoting shinobi602): My thoughts. If you own any other HW this is most likely a Mario Kart machine at launch.

DarkChronic: I'm likely in for Mario Kart World and Bravely Default, and then some of the backwards compatibility stuff from S1 that I'm looking forward to (namely Pokemon and hopefully Xenoblade). And then Donkey Kong is only a month later.

MadLaughter: I am pretty easy to please, and yet... there's not anything on there that I actually really want to play. I'm sure Mario Kart will be solid but..

mavericktopgun: Bad lineup :/ Expected new games or at least some Microsoft ports (Forza, Halo, ..)

FTF (quoting KodiakGTS): lol my exact thought. MK World and GameCube games.

Karu: Mario Kart + GameCube + Switch 2 Editions/Free Upgrades are a solid launch for me.

Bushido: Fast Fusion getting 2 seconds in a montage had me very disappointed.

Wavves: So this is why Wind Waker took forever

Falchion: Wow, that's an absolutely terrible launch lineup if you have another platform. Really only one game I care about. Might not buy at launch after all.

Xavillin: Hogwarts Legacy is already on Switch 1. Are WB Games making people pay full price for the Switch 2 version? wtf

NetOpWibby: I remember the Switch 1 launch; the only game worth playing was Breath of the Wild. Thankfully, BotW was so good. The Switch 2 launch lineup is fantastic AND BotW gets a massive update. Pretty great IMO.

Helix: much like the Switch launch, you are buying this day one for one game lol

Capricorn: Loved Yakuza Kiwami on the Switch so I'm super stoked for Yakuza 0 Director's Cut. Not sure if I'll get the Switch 2 anymore, but out of those I'd likely only get Mario Kart, Yakuza and Bravely Default.

cw_sasuke: I'm fine with Mario Kart World, the BotW/TotK free NSO upgrades, GC NSO, and Fast Fusion.

Joeshabadoo: Wait, these are the SW2 games; is there a list of the Switch 2 editions? Prime 4 SW2 edition is day 1, right?

TripleBee: If you've never played anything but Switch 1 it's an alright lineup. Obviously if you have other platforms it's pretty bad.

Angie: Mario Kart World only

Kebiinu: Mario Kart has me salivating at the mouth. Fortnite I'm especially excited for, because of the mouse controls!!!

dobahking91: That's a no for me dawg. Very weak imo.

linkboy (quoting cursed knowledge): The only sport that just started here in the US is baseball, and I doubt 2K had dev kits last year when they started on this year's MLB 2K (plus, it'll almost be two months into the season when the Switch 2 releases). The NFL pre-season starts in August, and Madden will be releasing, so there's that.

Freelance Brian (quoting Bushido): yup, Fast Racing RMX was the 2nd best launch game for Switch 1

Kirbivore: Good lineup despite the availability elsewhere.

game-biz: Insanely weak launch.

RochHoch: Mario Kart alone makes for an amazing launch lineup. When the heck is Prime 4 dropping tho

RebelStrike: Objectively a pretty good launch lineup, but outside of Mario Kart and GameCube NSO (F-Zero and Soul Calibur II), I'm honestly not interested in playing any of the third party games :/
  • WWW.RESETERA.COM
    Hamster announces Arcade Archives 2 for Nintendo Switch 2, starting with Ridge Racer
Hero of Legend (OP): At 1:40. This is HUGE. Now Hamster's finally going for bigger 3D era arcade games. Gotta wonder if Sega may join in as well (but maybe will just do it via their own Sega Ages line). Assets from Nintendo's press site: https://www.mediafire.com/file/efd6dnpzs12mx5c/ArcadeArchives2RIDGERACER.zip/file

Dest: i did want ridge racer on switch and i do guess i got it

entremet: Nice!

Yuri P: I'm so happy about this!! I hope they also bring Ridge Racer 2 and Rave Racer as well as other System 22 games

fiendcode: Might be my most hype announcement, lol. Lots of good potential 3D hardware based releases from Namco, Konami, Taito, Tecmo, Arika and SNK. Maybe even Nintendo (Cruis'n) if Hamster looks west.

Type VII: Absolutely massive news. 3D Arcade Archives is something I've been wanting for years, and the perfect game to launch it with too.

yyr: There is a gold mine in early 3D arcade games that nobody has yet touched. I hate Hamster's low-effort approach to Arcade Archives, but I will freaking buy a ton of these if they tap that gold mine properly! (quoting Hero of Legend on Sega Ages) I kind of thought Sega was done with Ages for now; after the lukewarm reception on Switch, they just stopped dead a few years back. It's a shame, as M2 always does a fantastic job.

mute: It is better than nothing but Ridge Racer deserves more. 3D arcade games are going to need an upres at a minimum.

shadowman16: Excellent news, huge news for me even... As people have said, there's tons of 3D titles that either have not been emulated, emulated poorly etc. So this'll be day 1 for me (plus I adore RR)

MulderYuffie: Carnevil pls

Hell Egret: Huge news and def one of the coolest announcements of the day

Altima VII: Not even joking, this was the moment of the show for me. Do we know if this is exclusive to Switch 2 or not? As much as I intend to buy a Switch 2, Steam is my "forever library" for old shit like this.

evilromero: I will definitely buy this. Eventually.

modernkicks: god bless hamster

RPG_Fanatic (quoting fiendcode): Cruis'n games would be awesome.

shadowman16 (quoting fiendcode): Does that mean we could get Fighting Layer ported finally? I'd be all over that... Pity that they can't/don't get stuff from Capcom though, there's a number of games from them I'd love to see ported (though at least we'll probably get those in future collections)

twinturbo2 (quoting RPG_Fanatic): Same here, but if Sega had to change the look of the player car in rereleases of OutRun to avoid paying Ferrari, I wonder if that would affect any rereleases of the Cruis'n games, since Cruis'n USA has an unlicensed Testarossa and Cruis'n World has an unlicensed Ferrari 456. Not to mention the licensed cars in Cruis'n Exotica. I'd still buy them, though.

OnanieBomb: hell yes

Tailzo: Please, Sega Rally

Yuri P: 3 minute gameplay video. People can probably figure out the exact version of the game based on the billboards around the track, it's one of the global releases. View: https://youtu.be/l67ziwkd6KU?si=Wi5YfXaNQfC-DhLm
(Quoting the Hamster press release, "Arcade Archives 2 RIDGE RACER will be released on the same day as Nintendo Switch 2!", www.hamster.co.jp) Platform: Nintendo Switch 2. Release date: June 5, 2025, the same day as the Nintendo Switch 2. "a DX version featuring support for an H-shifter and clutch is also included." "In addition to the ORIGINAL MODE, HIGH SCORE MODE, and CARAVAN MODE included in the Arcade Archives series, the Arcade Archives 2 series will add new features such as TIME ATTACK MODE and NETWORK MODE. In TIME ATTACK MODE you compete to see who can beat the final boss the fastest starting from the first battle. This mode focuses on how quickly you can complete the game, regardless of the score you achieve. In NETWORK MODE you can play against other users over the network. However, TIME ATTACK MODE and NETWORK MODE may not be implemented in some games. Since RIDGE RACER is a single-player game, NETWORK MODE is not implemented. Additionally, functionality has been significantly enhanced. Multiple save slots have been implemented instead of just one, along with a rewind feature that allows players to retry gameplay and a quick start feature for those who want to dive straight into the game. Furthermore, VRR support has been added, enabling more accurate reproduction of the original arcade game's experience."

Vespa: Awesome to see an Arcade RR port! we inch closer to Rave Racer...

Shift Breaker: What? What? Day one. I've played the hell out of the PS1 versions, absolutely need to give the arcade one a try.

RedSwirl: Looking at the arcade board RR ran on, this potentially puts Time Crisis 1 on the table.

Zor: OH FUCK! I started playing these games for the first time last month, hyped as hell for this!

cw_sasuke: These classic games running at up to 120fps is gonna be nice *believe

Atolm: Meh. A new Ridge Racer would be cool, this one I already finished dozens of times. C'mon Namco, give us something.

andshrew: Very nice. I've been wanting RR1 as a PS classic release but this will do nicely.

Mandos: Wonder if this puts Ultra Neo Geo 64 on the table? Always wanted to play those. Bandai being on the table gives me wild dreams of Soul Edge and Gundam Vs XD

Altima VII (quoting Yuri P): Definitely Switch 2 exclusive then, but this will be so nice. Given the other games people are speculating on here, I could justify a Switch 2 as near enough an arcade ports machine. Love this era and style of games.

andshrew: They mention using VRR to allow the games to run at their original refresh rate: "Furthermore, VRR support has been added, enabling more accurate reproduction of the original arcade game's experience."

dallow_bg (quoting andshrew): Nice catch.

Eddman: Awesome! I hope Joy-Cons with mouse mean there's a slight chance of seeing light gun games. Things like Virtua Cop or Time Crisis shouldn't be lost forever.

Vespa (quoting Yuri P): These two are 8-player iirc, imagine the glory of NETWORK MODE

Yuri P (quoting andshrew): This definitely opens the door for the PS1 release to be brought back as a PS classic. The only issue there is you obviously can't play with a NeGcon, which felt a lot better than playing on the d-pad. But both the arcade and PS1 versions are very historically significant games and it would be great to have access to both, especially because the difference between them is huge and a lot of people don't know about it.

MaxRoss (quoting Tailzo): sega rally, daytona, scud race and gti club :P

Turnbl: Arrgghh, suddenly I have a use case for a Switch 2. Just when I thought I was out..! Great great game, much prefer the beefy arcade cars to the PS port (which is also excellent in a different way). 1'05 club on MAME!

jett: It's nice that someone remembers Ridge Racer at least. I would prefer a collection or something rather than just the first arcade game.

JazzmanZ: Gimmie Hydro Thunder

digita1alchemy: Hope this opens the floodgates. Starblade next, plz.

Dash Kappei: Killer app right there, Hamster killing it as usual. As for RR, cool, but I fucking hate that we still have to suffer the lack of analog triggers

andshrew (quoting Yuri P): I never used a NeGcon, would its inputs translate to an analog stick? The reason I ask is, interestingly enough, from what people have reverse engineered, it seems that there might have been intention to implement some kind of NeGcon emulation within the Sony PS1 emulator.

Warrior of Light: Only Nintendo, again? Hamster, please.

SharpX68K: Oh wow, I missed this, as I did not watch 100% of the Direct. This is wonderful news. I'm sure more 90s 3D polygon arcade games will come. (quoting Yuri P and Vespa on Rave Racer) Heck yeah, Rave Racer is my favorite game in the series.

Kurekure Takora: One of Namco's hidden gems. Came out in 1996 and it was just as fun as Pilot Wings 64 and Mario 64's Wing Cap. I'd love to play the game with the Joy-Cons serving as an arm cycle.

Hero of Legend (OP): Took the liberty of clipping the segment and adding it to the OP as it shows the game in 60fps, unlike Hamster's video above for some reason.

erlim (quoting Altima VII): Does this mean the entire Arcade Archives 2 line is Switch 2 exclusive? If so that is absolutely huge news for retro heads and preservationist fans!

OnanieBomb: I played this game for, like, without exaggerating, 6 hours straight when I got it with my PS1 on Christmas

andshrew (quoting erlim): No, we've already had one Arcade Archives 2 release on PS5 (but they only use the ACA2 acronym in the store listing).

erlim (quoting andshrew): Yeah, but Neo Geo releases are usually separate; like ACA Neo Geo was on Xbox and the Windows store, but those two platforms did not get other ACA titles. So it could be that Nintendo rocks non-Neo Geo ACA titles exclusively, no?
  • WCCFTECH.COM
    G.Skill Rolls Out 128 GB Capacity DDR5 Kit, Trident Z5 Royal With 64 GB Per DIMM & 8000 MT/s Speeds
    G.Skill has rolled out its fastest high-capacity Trident Z5 Royal DDR5 memory kit, offering 128 GB of capacity at 8000 MT/s.

    G.Skill Combines High Capacity With High Speeds In Its Latest DDR5 Trident Z5 Royal Memory Kit Launch: Up To 128 GB Kits With 8000 MT/s Speeds

    Press Release: G.SKILL International Enterprise Co., Ltd., the world's leading brand of performance overclock memory and PC components, is thrilled to announce a new high-speed overclocked DDR5 memory specification with an ultra-high kit capacity: DDR5-8000 CL44 in a 128 GB (64 GB x 2) kit. This is the world's first DDR5 memory kit with 64 GB high-capacity modules to reach the extreme overclock level of DDR5-8000, setting a new milestone for high-performance computing, content creation, AI applications, and advanced workstation workloads.

    "We are very excited to announce the world's first overclocked DDR5-8000 memory with a total kit capacity of 128GB," says Frank Hung, Product Marketing at G.SKILL. "This is yet another milestone for DDR5 overclock performance that G.SKILL has successfully reached, surpassing all previous limits to demonstrate never-before-seen memory specifications."

    New Era of Overclocking High-Capacity DDR5 64GB Modules
    Engineered for high-capacity overclocked performance, the DDR5-8000 128 GB (64 GB x 2) kit combines ultra-high memory speed with massive kit capacity, surpassing the previous per-module capacity maximum of 48 GB. Power users and content creators who want overclocked memory for capacity-hungry applications finally have an ideal DDR5 solution. Refer to the validation screenshot to see DDR5-8000 CL44-58-58 128 GB (64 GB x 2) tested on the ASUS ROG CROSSHAIR X870E APEX motherboard with an AMD Ryzen 9 9950X desktop processor.

    Extreme Speed DDR5-9000 CL48-64-64 64GB (32GBx2) Kit Specification
    Dedicated to the continual development of extreme-overclock performance memory kits, G.SKILL is also announcing an extreme-speed DDR5-9000 CL48-64-64 memory specification at a 64 GB (32 GB x 2) kit capacity. See the Memtest screenshot of this memory specification running on the ASUS ROG MAXIMUS Z890 APEX motherboard with an Intel Core Ultra 7 265K desktop processor.
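    For a rough sense of what these speed grades translate to, here is a minimal back-of-the-envelope sketch (not part of the press release) that converts a DDR5 transfer rate into theoretical peak bandwidth. It assumes the standard 64-bit data path per DIMM, a two-DIMM kit like the ones announced above, and decimal gigabytes.

```python
# Rough DDR5 bandwidth estimate: transfers/s * bytes per transfer * number of DIMMs.
# Assumptions (not stated in the press release): 64-bit data path per DIMM,
# a two-DIMM kit, and "GB/s" meaning 10^9 bytes per second.

def ddr5_peak_bandwidth_gbs(transfer_rate_mts: int, dimms: int = 2, bus_bits: int = 64) -> float:
    bytes_per_transfer = bus_bits / 8
    return transfer_rate_mts * 1e6 * bytes_per_transfer * dimms / 1e9

print(ddr5_peak_bandwidth_gbs(8000))  # ~128 GB/s for the DDR5-8000 128 GB (64GBx2) kit
print(ddr5_peak_bandwidth_gbs(9000))  # ~144 GB/s for the DDR5-9000 64 GB (32GBx2) kit
```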
  • WCCFTECH.COM
    NVIDIA GeForce RTX 5060 Ti Reportedly Costs The Same As 4060 Ti, 16 GB For $499 & 8 GB For $399
    NVIDIA's upcoming GeForce RTX 5060 Ti graphics cards will reportedly cost the same as the RTX 4060 Ti models, with 16 GB and 8 GB options.

    NVIDIA GeForce RTX 5060 Ti 16 GB GPUs To Be Priced At $499, 8 GB Variants For $399

    In a few weeks, NVIDIA will launch its new mainstream gaming solution, the GeForce RTX 5060 Ti. The card has already seen various leaks, and the latest one, from Board Channels, concerns pricing.

    According to the report, the NVIDIA GeForce RTX 5060 Ti will launch on April 16 and will be priced the same as the RTX 4060 Ti. Both cards come in two options, a 16 GB and an 8 GB model. With the RTX 5060 Ti, NVIDIA has decided to stick with the same pricing as the RTX 4060 Ti: the 16 GB variant will be priced at $499 US, while the 8 GB variant will be priced at $399 US.

    Previously, the 16 GB model was expected to launch earlier than the 8 GB model, but those plans changed and both models will now be available on the same day. The $100 US gap between the two means users will have to pay a significant premium for the larger VRAM capacity.

    Based on what is already known about the GeForce RTX 5060 Ti, the GPU will be the GB206-300-A1 with 4,608 cores, paired with a 128-bit memory bus in 16 GB and 8 GB VRAM configurations. Thanks to faster 28 Gbps GDDR7 pin speeds, the card will offer 448 GB/s of total bandwidth, roughly a 55% boost over the RTX 4060 Ti. The GPU will carry a 180W TDP, 20W higher than the RTX 4060 Ti.

    For $50 more, one can also find the RTX 5070, though it is rarely available at MSRP, and the RX 9070 (non-XT) is another option; that assumes the RTX 5060 Ti itself will be available in decent quantities and at MSRP. What we have noticed with previous launches is that some cards sell at MSRP at launch, but as soon as the week-one supply is exhausted, prices see a massive hike.

    Also, NVIDIA isn't cutting prices as it did with the last two models, the RTX 5070 Ti and the RTX 5070, both of which saw a $50 US reduction versus their predecessors. The GeForce RTX 5060 Ti will be followed by the GeForce RTX 5060 and RTX 5050, completing the Blackwell RTX 50 gaming lineup for now.

    NVIDIA GeForce RTX 50 GPU Specs (Preliminary):
    RTX 5090: Blackwell GB202-300, 170 SMs (192 full), 21,760 cores, 2.41 GHz, 32 GB GDDR7, 512-bit bus, 28 Gbps, 1,792 GB/s, 1x 12V-2x6 (16-pin), launched 30 January 2025, 575W TBP, $1,999 US
    RTX 5080: Blackwell GB203-400, 84 SMs (84 full), 10,752 cores, 2.62 GHz, 16 GB GDDR7, 256-bit bus, 30 Gbps, 960 GB/s, 1x 12V-2x6 (16-pin), launched 30 January 2025, 360W TBP, $999 US
    RTX 5070 Ti: Blackwell GB203-300-A1, 70 SMs (84 full), 8,960 cores, 2.45 GHz, 16 GB GDDR7, 256-bit bus, 28 Gbps, 896 GB/s, 1x 12V-2x6 (16-pin), launched 20 February 2025, 300W TBP, $749 US
    RTX 5070: Blackwell GB205-300-A1, 50 SMs (50 full), 6,144 cores, 2.51 GHz, 12 GB GDDR7, 192-bit bus, 28 Gbps, 672 GB/s, 1x 12VHPWR (16-pin), launched 5 March 2025, 250W TBP, $549 US
    RTX 5060 Ti: Blackwell GB206-300, 36 SMs (36 full), 4,608 cores, clocks TBD, 16 GB / 8 GB GDDR7, 128-bit bus, 28 Gbps?, 448 GB/s, 1x 12VHPWR (16-pin), April 2025, 180W TBP, $449-$399?
    RTX 5060: Blackwell GB206?, 30 SMs, 3,840 cores, clocks TBD, 8 GB GDDR7, 128-bit bus, 28 Gbps?, 448 GB/s, 1x 12VHPWR (16-pin), April 2025, 150W TBP, $299?
    RTX 5050: Blackwell GB207-300, 20 SMs (20 full), 2,560 cores, clocks TBD, 8 GB GDDR6, 128-bit bus, memory speed TBD, bandwidth TBD, power interface TBD, April 2025, 135W TBP, $249-$199?

    News Sources: Gazlog, Harukaze5719
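    As a quick sanity check on the bandwidth claim above, the sketch below recomputes peak memory bandwidth from per-pin speed and bus width. The RTX 4060 Ti baseline of 18 Gbps GDDR6 on a 128-bit bus comes from NVIDIA's published specs rather than this article, so treat it as an outside assumption.

```python
# Peak GPU memory bandwidth: per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
# The 18 Gbps / 128-bit RTX 4060 Ti figures are an outside assumption used as the baseline.

def gpu_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    return pin_speed_gbps * bus_width_bits / 8

rtx_5060_ti = gpu_bandwidth_gbs(28, 128)  # 448.0 GB/s (GDDR7, per the leak)
rtx_4060_ti = gpu_bandwidth_gbs(18, 128)  # 288.0 GB/s (GDDR6)
uplift = (rtx_5060_ti / rtx_4060_ti - 1) * 100
print(f"{rtx_5060_ti:.0f} GB/s vs {rtx_4060_ti:.0f} GB/s -> +{uplift:.0f}%")  # ~ +56%, in line with the ~55% claim
```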