• Delta Force's Black Hawk Down Campaign is Seven Chapters Long, Launch Trailer Released
    gamingbolt.com
    Team Jade's Delta Force, a reboot of NovaLogic's tactical shooter series, is finally getting its campaign, Black Hawk Down, in the coming days. A reboot of Ridley Scott's film and a homage to Delta Force: Black Hawk Down, it's completely free, with seven chapters available. Though it's playable solo, the developer advises co-op play with three other players. Each player chooses a class and loadout before jumping in, starting with urban warfare and eventually taking point. Mild spoilers are available for what each chapter entails, but much like the film, a friendly chopper goes down, and it's your job to save the crew. The Black Hawk Down campaign launches on February 21st at 7 PM PST. Check out the launch trailer below for a brief look at the campaign. Delta Force is available on PC, and it's also in development for Xbox, PlayStation, iOS, and Android. Check out our review of the open beta version.
  • Hades 2's Third Content Update Coming Some Months From Now, Says Supergiant Games
    gamingbolt.com
    With its second major update for Hades 2 now live, Supergiant Games has confirmed that its third content update will drop "some months from now." However, it reiterated that it's still too early to discuss what comes after (like when Version 1.0 will launch). The team will release further patches to address feedback and fix any issues arising from The Warsong Update, but it did outline some of its plans. These include hidden Aspects for the Nocturnal Arms (which will be available "next time"), enhanced regional bosses ("We want them to be able to challenge and surprise you even more"), and the rest of the story, which includes building on various character relationships and subplots. While players can fully complete runs in the Underworld and on the Surface, Supergiant will focus on filling out more of the game's content categories, "finishing more and more of what we started until we're done!" Hades 2 is in early access for PC via Steam and the Epic Games Store. Supergiant hasn't confirmed whether it will come to other platforms like the first game did, but that may happen once Version 1.0 plans solidify. Stay tuned for updates, and in the meantime, check out our review of the early access launch version.
  • Wikipedia picture of the day for February 20
    en.wikipedia.org
    Catherine Grand (1761–1835) was a French courtesan and noblewoman. Born in India as the daughter of a French East India Company officer, she married George Grand, an officer of the English East India Company. After her marriage, she had a scandalous liaison with Bengal councillor Philip Francis in Calcutta. Her husband sent her to Paris, where she became a popular courtesan, having relationships with several powerful men, and was known as Madame Grand. She became the mistress and later the wife of French diplomat Charles Maurice de Talleyrand-Périgord, the first prime minister of France. This 1783 oil-on-canvas portrait of Grand was painted by Élisabeth Vigée Le Brun. It was exhibited at the Salon of the Royal Academy in Paris the same year, as one of at least ten portraits submitted by Le Brun, and was favourably received. The painting is now in the collection of the Metropolitan Museum of Art in New York.
    Painting credit: Élisabeth Vigée Le Brun
  • On this day: February 20
    en.wikipedia.org
    February 20: Day of the Heavenly Hundred Heroes in Ukraine (2014)
    1685 – The French colonization of Texas began with the landing of colonists led by Robert de La Salle near Matagorda Bay.
    1959 – Canadian prime minister John Diefenbaker cancelled the Avro CF-105 Arrow (pictured) interceptor-aircraft program amid much political debate.
    1970 – Wat Phra Dhammakaya, one of the largest Buddhist temples in Thailand, was founded in Pathum Thani.
    1998 – At the age of 15, American figure skater Tara Lipinski became the then-youngest winner of an Olympic gold medal in the history of the Winter Olympic Games.
    Born or died on this day: Wulfric of Haselbury (d. 1154), Elizabeth Holloway Marston (b. 1893), Gail Kim (b. 1977), Tōru Takemitsu (d. 1996)
  • Konami raises base salaries of Japanese workers for fourth year in a row
    www.gamedeveloper.com
    Justin Carter, Contributing Editor · February 19, 2025
    At a Glance: Konami's base salary raise comes weeks after it touted the Silent Hill 2 remake's surging sales.
    For the fourth consecutive fiscal year, Konami will increase the base salary of its Japanese developers, starting in March 2026. In a translated announcement, the Silent Hill maker said the raise increases pay by ¥5,000 (about $32) per month, or ¥60,000 ($395) per year. It applies to full-time employees of its domestic groups, with the aim of "creating a stable and rewarding environment for employees." Additionally, the starting salary for new graduates is being raised from the "traditional" ¥300,000 ($1,977) to ¥305,000 ($2,010). "We will continue to make human capital investments, including this base-up of basic salary, to improve employee engagement and continue to strive to create better products and services," wrote Konami. Japanese developers have frequently (and recently) given their full-time workers pay raises. Along with Konami, Elden Ring creator FromSoftware increased salaries last year, as have Capcom and Atlus. Earlier in February, Konami changed its revenue forecast for the fiscal quarter in light of Bloober Team's Silent Hill 2. The remake of the beloved PlayStation 2 game had sold 2 million copies as of December 31, 2024, exceeding the developer's expectations and resulting in 32 percent year-over-year growth for Konami's Digital Contents revenue. The publisher's next title is another remake: Metal Gear Solid Delta: Snake Eater, a remake of the 2004 stealth-action game, expected to release on August 28.
  • Path of Exile co-director Chris Wilson has left Grinding Gear Games
    www.gamedeveloper.com
    Recent document filings show that Chris Wilson has quietly exited Grinding Gear Games, the studio he co-founded in 2006. According to the New Zealand Companies Office, Wilson "ceased" his role as managing director on January 21, 2025. Grinding Gear has not announced his departure on its website or on social media. However, several Path of Exile players have noticed his absence from the forums and asked about his whereabouts in recent months. Wilson co-directed the original Path of Exile with Mark Roberts, but did not join Roberts in directing Path of Exile II. He was a key public figure in marketing the first game, and during GDC 2019, he discussed how the studio developed the game with long-term player retention in mind. Grinding Gear released an Early Access version of Path of Exile II in December for PC, PlayStation 5, and Xbox Series X|S ahead of a full release planned for May. Originally, the developer was working on it as a new storyline that could be played separately from the first Path of Exile, but it later ballooned into its own game. On PC, the sequel opened to over 578,000 players, more than double the first game's most recent peak of 229,337 players. In January, Grinding Gear revealed it was pausing development on a new update for the original Path of Exile until after the sequel's 1.0 release. Game Developer has reached out to Grinding Gear about Wilson's departure and will update this story when a response is given. Game Developer and Game Developers Conference are sibling organizations under Informa.
  • Rabbit shows off the AI agent it should have launched with
    www.theverge.com
    The Humane AI Pin has collapsed, but Rabbit is still kicking. The company published a blog post and video today showing off a generalist Android agent, slowly controlling apps on a tablet in much the same way that Rabbit claimed its R1 device would over a year ago. (It couldn't, and still can't.) The work builds on LAM Playground, a generalist web agent Rabbit launched last year. The engineers don't use the Rabbit R1 at all for the demonstration. Instead, they type their requests into a prompt box on a laptop, which translates them into actions on an Android tablet. They task it with things like finding a YouTube video or locating a whiskey cocktail recipe in a cocktail app, gathering the ingredients, and then adding them to a Google Keep grocery list. At one point, they ask it to download the puzzle game 2048 and figure out how to play it, which it does, albeit slowly. The model generally does the things they ask, sometimes well and sometimes with quirks, like sending a poem over WhatsApp one message at a time instead of in a single block. One of the engineers wonders if they should have asked it to use line breaks in their prompt, but they don't go back to try again. Rabbit's AI agent is clearly still a work in progress, as it has been since the R1 launched with almost none of the capabilities that founder and CEO Jesse Lyu presented in January 2024. Rabbit has steadily rolled out updates, like the ability to train its AI agent to complete specific tasks or prompt it to remake its own interface. The examples it presented today are only "the core action loop an Android agent completes," according to Rabbit's blog post. The company promises to share more about its upcoming cross-platform multi-agent system in the coming weeks.
  • Nvidia is launching priority access to help fans buy RTX 5080 and 5090 FE GPUs
    www.theverge.com
    Nvidia has yet to explain why it launched its GeForce RTX 5090 and 5080 GPUs with barely any inventory, some major launch driver issues, and the occasional melting power connector, but it has apparently reconsidered its stance on scalpers. The company has just announced a way for Nvidia fans to sign up for Verified Priority Access to buy the elusive two-slot, SFF-friendly RTX 5090 and 5080 Founders Edition graphics cards. Like the similar Verified Priority Access program for the RTX 4090, the new program is invite-only, but this time you'll apply for access by filling out a form rather than being pre-selected. The site will check that you already have an Nvidia account (accounts created after January 30th need not apply) and ask whether you'd prefer a 5090 or a 5080. Then it will apparently use an algorithm to figure out whether you're a real gamer (analyzing your Nvidia app / GeForce Experience use) before offering a card. Limit one per person. "Invites will begin rolling out next week," writes Nvidia. The company doesn't say how many cards have been allocated to this program, so it's difficult to tell whether this is a meaningful way to get cards to gamers rather than scalpers.
  • Learning Intuitive Physics: Advancing AI Through Predictive Representation Models
    www.marktechpost.com
    Humans possess an innate understanding of physics, expecting objects to behave predictably without abrupt changes in position, shape, or color. This fundamental cognition is observed in infants, primates, birds, and marine mammals, supporting the core knowledge hypothesis, which suggests humans have evolutionarily developed systems for reasoning about objects, space, and agents. While AI surpasses humans in complex tasks like coding and mathematics, it struggles with intuitive physics, highlighting Moravec's paradox. AI approaches to physical reasoning fall into two categories: structured models, which simulate object interactions using predefined rules, and pixel-based generative models, which predict future sensory inputs without explicit abstractions.
    Researchers from FAIR at Meta, Université Gustave Eiffel, and EHESS explore how general-purpose deep neural networks develop an understanding of intuitive physics by predicting masked regions in natural videos. Using the violation-of-expectation framework, they demonstrate that models trained to predict outcomes in an abstract representation space, such as Joint Embedding Predictive Architectures (JEPAs), can accurately recognize physical properties like object permanence and shape consistency. In contrast, video prediction models operating in pixel space and multimodal large language models perform close to random guessing. This suggests that learning in an abstract space, rather than relying on predefined rules, is sufficient to acquire an intuitive understanding of physics.
    The study focuses on a video-based JEPA model, V-JEPA, which predicts future video frames in a learned representation space, aligning with the predictive coding theory in neuroscience. V-JEPA achieved 98% zero-shot accuracy on the IntPhys benchmark and 62% on the InfLevel benchmark, outperforming other models. Ablation experiments revealed that intuitive physics understanding emerges robustly across different model sizes and training durations.
Even a small 115-million-parameter V-JEPA model, or one trained on just one week of video, showed above-chance performance. These findings challenge the notion that intuitive physics requires innate core knowledge and highlight the potential of abstract prediction models in developing physical reasoning.
    The violation-of-expectation paradigm in developmental psychology assesses intuitive physics understanding by observing reactions to physically impossible scenarios. Traditionally applied to infants, this method measures surprise responses through physiological indicators like gaze time. More recently, it has been extended to AI systems by presenting them with paired visual scenes, where one includes a physical impossibility, such as a ball disappearing behind an occluder. The V-JEPA architecture, designed for video prediction tasks, learns high-level representations by predicting masked portions of videos. This approach enables the model to develop an implicit understanding of object dynamics without relying on predefined abstractions, as shown through its ability to anticipate and react to unexpected physical events in video sequences.
    V-JEPA was tested on datasets such as IntPhys, GRASP, and InfLevel-lab to benchmark intuitive physics comprehension, assessing properties like object permanence, continuity, and gravity. Compared to other models, including VideoMAEv2 and multimodal language models like Qwen2-VL-7B and Gemini 1.5 Pro, V-JEPA achieved significantly higher accuracy, demonstrating that learning in a structured representation space enhances physical reasoning. Statistical analyses confirmed its superiority over untrained networks across multiple properties, reinforcing that self-supervised video prediction fosters a deeper understanding of real-world physics.
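The violation-of-expectation comparison described above can be sketched in a few lines: each clip in a matched pair is scored by the model's prediction error, and the pair counts as correctly classified when the physically impossible clip elicits the higher "surprise." The `prediction_errors` method below is a hypothetical stand-in for a predictor such as V-JEPA (not its real API); this is a minimal sketch of the evaluation logic only.

```python
import numpy as np

def surprise(per_frame_errors):
    # Aggregate per-frame prediction errors into one "surprise" score;
    # a higher error means the clip was harder for the model to predict.
    return float(np.mean(per_frame_errors))

def voe_correct(model, possible_clip, impossible_clip):
    # model.prediction_errors(clip) is a hypothetical hook returning the
    # model's per-frame errors in its learned representation space.
    s_possible = surprise(model.prediction_errors(possible_clip))
    s_impossible = surprise(model.prediction_errors(impossible_clip))
    # The pair is classified correctly when the impossible event
    # (e.g., an object vanishing behind an occluder) is more surprising.
    return s_impossible > s_possible
```

Benchmark accuracy is then just the fraction of clip pairs for which this comparison comes out right, which is why an untrained or pixel-space model hovers near 50%.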
These findings highlight the challenge of intuitive physics for existing AI models and suggest that predictive learning in a learned representation space is key to improving AI's physical reasoning abilities.
    In conclusion, the study explores how state-of-the-art deep learning models develop an understanding of intuitive physics. By pretraining V-JEPA on natural videos with a prediction task in a learned representation space, the model demonstrates intuitive physics comprehension without task-specific adaptation. The results suggest this ability arises from general learning principles rather than hardwired knowledge. However, V-JEPA struggles with object interactions, likely due to training limitations and short video processing. Enhancing model memory and incorporating action-based learning could improve performance. Future research may examine models trained on infant-like visual data, reinforcing the potential of predictive learning for physical reasoning in AI.
    Check out the paper for more details. Sana Hassan is a consulting intern at Marktechpost and a dual-degree student at IIT Madras.
  • AI Agent Developer: A Journey Through Code, Creativity, and Curiosity
    towardsai.net
    Author(s): Talha Nazar. Originally published on Towards AI.
    Artificial Intelligence (AI) agents are no longer just science fiction: they're transforming industries, automating mundane tasks, and solving complex problems that were once thought impossible. From virtual assistants like Siri to autonomous robots in warehouses, AI agents have become indispensable. But how does one become an expert in developing these intelligent systems? This story takes you on a realistic journey through the life of Alex, an aspiring AI agent developer. Following Alex's footsteps, you'll learn everything from foundational concepts to advanced techniques, complete with practical examples, visualizations, and links to resources. Let's dive in.
    Laying the Foundation: Understanding What an AI Agent Is
    An AI agent is a system capable of perceiving its environment, making decisions, and taking actions to achieve specific goals. Unlike traditional software programs, AI agents use machine learning models to adapt their behavior based on data.
    Key components of an AI agent:
    - Perception: sensors or input mechanisms to gather information about the environment.
    - Decision-Making: algorithms to process inputs and decide on actions.
    - Action Execution: mechanisms to interact with the environment.
    - Learning: the ability to improve performance over time using feedback loops.
    Example: imagine building a chatbot that answers customer queries. It perceives user input (text), decides on a response using natural language processing (NLP), executes the action (sending the reply), and learns from past interactions to enhance future responses.
    Graphical visualization:
    +-------------------+
    |    Environment    |
    +-------------------+
             |
             v
    +-------------------+
    |    Perception     |
    +-------------------+
             |
             v
    +-------------------+
    |  Decision-Making  |
    +-------------------+
             |
             v
    +-------------------+
    | Action Execution  |
    +-------------------+
             |
             v
    +-------------------+
    |     Learning      |
    +-------------------+
    Building Blocks of AI Agents
    To create robust AI agents, we need to understand several key technologies:
    1. Machine Learning Basics
    Machine learning (ML) enables AI agents to learn patterns from data without explicit programming. There are three main types:
    - Supervised Learning: training a model with labeled data.
    - Unsupervised Learning: finding hidden structures in unlabeled data.
    - Reinforcement Learning: teaching an agent to make sequential decisions through rewards and penalties.
    Practical example: suppose you want your AI agent to classify emails as spam or not spam.
    You'd use supervised learning with labeled email datasets.
    Email spam classifier using scikit-learn:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Sample dataset
emails = ["Win money now!", "Meeting scheduled", "Free lottery tickets"]
labels = [1, 0, 1]  # 1 = spam, 0 = not spam

# Convert text into numerical features
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=42)

# Train a Naive Bayes classifier
model = MultinomialNB()
model.fit(X_train, y_train)

# Evaluate the model on the held-out data
accuracy = model.score(X_test, y_test)
print(f"Accuracy: {accuracy}")
```

    Explanation: CountVectorizer converts text into numerical vectors, and MultinomialNB is a probabilistic model suitable for text classification. We evaluate the model's accuracy on unseen data (a three-email dataset is, of course, only a toy illustration).
    Learn more about scikit-learn.
    2. Natural Language Processing (NLP)
    NLP allows AI agents to understand and generate human language. Libraries like NLTK, SpaCy, and Hugging Face Transformers simplify NLP tasks.
    Practical example: creating a sentiment analysis tool to determine whether a review is positive or negative.
    Sentiment analysis using Hugging Face Transformers:

```python
from transformers import pipeline

# Load a pre-trained sentiment analysis pipeline
sentiment_pipeline = pipeline("sentiment-analysis")

# Analyze the sentiment of a sample text
result = sentiment_pipeline("I love this product!")
print(result)
```

    Output:
    [{'label': 'POSITIVE', 'score': 0.9987}]
    Explanation: the pipeline function loads a pre-trained model fine-tuned for sentiment analysis. This approach leverages transfer learning, where a general-purpose model is adapted for a specific task.
    Hugging Face documentation.
    3. Reinforcement Learning (RL)
    RL is ideal for scenarios requiring decision-making under uncertainty, such as game-playing agents or autonomous vehicles.
    Setting Up Your Development Environment
    To become an AI agent developer, you need the right tools. Here's how to set up your development environment:
    Step 1: Install Python. Python is the most popular programming language for AI development due to its simplicity and extensive libraries. Download it from python.org and install it on your machine.
    Step 2: Install essential libraries. Use pip to install them:
    pip install numpy pandas matplotlib scikit-learn tensorflow keras gym
    - NumPy: numerical computations.
    - Pandas: data manipulation.
    - Matplotlib: data visualization.
    - Scikit-Learn: machine learning algorithms.
    - TensorFlow/Keras: deep learning models.
    - Gym: reinforcement learning environments.
    Step 3: Choose an IDE. Integrated development environments (IDEs) like VS Code, PyCharm, or Jupyter Notebook make coding easier. I recommend starting with Jupyter Notebook for its interactive nature.
    Building Your First AI Agent
    Let's build a simple AI agent using reinforcement learning, a type of machine learning where an agent learns to perform tasks by interacting with an environment and receiving rewards or penalties.
    Example 1: CartPole Problem
    The CartPole problem is a classic RL task where the goal is to balance a pole on a moving cart.
    We'll use the OpenAI Gym library to simulate this environment. (The code below uses the classic Gym API, where reset() returns the observation directly and step() returns four values; newer Gymnasium releases changed both signatures.)
    Step 1: Import libraries.

```python
import gym
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
```

    Step 2: Initialize the environment.

```python
env = gym.make('CartPole-v1')
state_size = env.observation_space.shape[0]
action_size = env.action_space.n
print(f"State Size: {state_size}, Action Size: {action_size}")
```

    Here, state_size represents the number of variables describing the environment (e.g., cart position, velocity), and action_size represents the possible actions (e.g., move left or right).
    Step 3: Define the agent. We'll create a simple Q-learning agent. Q-learning is a model-free RL algorithm that learns the value of actions in each state. Because CartPole's observations are continuous, we first discretize them into bins so that they can index a tabular Q-table.

```python
N_BINS = 10
# Approximate bounds per observation variable; the two velocity
# terms are unbounded in Gym, so we clip them to a sensible range.
STATE_BOUNDS = [(-4.8, 4.8), (-3.0, 3.0), (-0.418, 0.418), (-3.5, 3.5)]

def discretize(obs):
    """Map a continuous observation to a tuple of bin indices."""
    idxs = []
    for value, (lo, hi) in zip(obs, STATE_BOUNDS):
        value = min(max(value, lo), hi)
        idxs.append(int((value - lo) / (hi - lo) * (N_BINS - 1)))
    return tuple(idxs)

class QLearningAgent:
    def __init__(self, state_size, action_size):
        self.q_table = np.zeros((N_BINS,) * state_size + (action_size,))
        self.learning_rate = 0.1
        self.discount_factor = 0.95
        self.epsilon = 1.0  # exploration rate

    def choose_action(self, state):
        if np.random.rand() <= self.epsilon:
            return env.action_space.sample()  # explore
        return np.argmax(self.q_table[state])  # exploit

    def learn(self, state, action, reward, next_state):
        old_value = self.q_table[state + (action,)]
        next_max = np.max(self.q_table[next_state])
        new_value = (1 - self.learning_rate) * old_value + \
            self.learning_rate * (reward + self.discount_factor * next_max)
        self.q_table[state + (action,)] = new_value
```

    Step 4: Train the agent.

```python
agent = QLearningAgent(state_size, action_size)
episodes = 1000
scores = deque(maxlen=100)

for episode in range(episodes):
    state = discretize(env.reset())
    total_reward = 0
    done = False
    while not done:
        action = agent.choose_action(state)
        next_state, reward, done, _ = env.step(action)
        next_state = discretize(next_state)
        agent.learn(state, action, reward, next_state)
        state = next_state
        total_reward += reward
    agent.epsilon = max(0.01, agent.epsilon * 0.995)  # decay exploration
    scores.append(total_reward)
    avg_score = np.mean(scores)
    if episode % 100 == 0:
        print(f"Episode: {episode}, Average Score: {avg_score}")
```

    Step 5: Visualize results.

```python
plt.plot(scores)
plt.xlabel("Episode")
plt.ylabel("Score")
plt.title("Training Progress (last 100 episodes)")
plt.show()
```

    Example 2: Tic-Tac-Toe RL Agent

```python
import gym
import numpy as np

# Create a custom Tic-Tac-Toe environment
class TicTacToeEnv(gym.Env):
    def __init__(self):
        self.board = np.zeros((3, 3))
        self.action_space = gym.spaces.Discrete(9)
        self.observation_space = gym.spaces.Box(low=-1, high=1, shape=(3, 3))

    def step(self, action):
        row, col = divmod(action, 3)
        if self.board[row][col] != 0:
            return self.board.flatten(), -10, False, {}  # penalize invalid moves
        self.board[row][col] = 1
        done = self.check_winner() or np.all(self.board != 0)
        reward = 1 if done else 0
        return self.board.flatten(), reward, done, {}

    def reset(self):
        self.board = np.zeros((3, 3))
        return self.board.flatten()

    def check_winner(self):
        # Check rows, columns, and diagonals for three in a row
        b = self.board
        lines = list(b) + list(b.T) + [b.diagonal(), np.fliplr(b).diagonal()]
        return any(abs(line.sum()) == 3 for line in lines)

env = TicTacToeEnv()
state = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # random action
    state, reward, done, info = env.step(action)
    if done:
        print("Game Over")
        break
```

    Explanation: the gym.Env subclass defines the environment dynamics, and the agent interacts with the environment through step(), receiving rewards or penalties.
    OpenAI Gym documentation.
    Advanced Techniques for AI Agent Development
    1. Multi-Agent Systems: in some applications, multiple AI agents collaborate or compete. For instance, self-driving cars must coordinate with other vehicles.
    2. Explainability and Debugging: as AI agents grow more complex, ensuring transparency becomes crucial. Tools like SHAP and LIME help interpret model predictions.
    3. Transfer Learning: leverage pre-trained models to solve similar problems faster.
    For instance, use a pre-trained vision model for object detection in autonomous vehicles.
    Practical Applications and Career Pathways
    AI agents have countless real-world applications:
    - Healthcare: diagnosing diseases using medical imaging.
    - Finance: algorithmic trading and fraud detection.
    - Entertainment: game-playing agents like AlphaGo.
    To succeed as an AI agent developer, focus on:
    - Continuous learning: stay updated with research papers and online courses.
    - Portfolio building: develop projects and share them on GitHub.
    - Networking: join AI communities and attend conferences.
    Conclusion: Your Path Forward
    By mastering the concepts outlined above (machine learning, NLP, reinforcement learning, multi-agent systems, and explainability), you can develop cutting-edge AI agents. Remember, becoming an expert requires continuous learning and experimentation.
    Next steps:
    - Explore Kaggle competitions for hands-on practice.
    - Join communities like Reddit's r/MachineLearning for discussions.
    - Contribute to open-source projects on GitHub.
    Citations and References
    - Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
    - Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press.
    - Hugging Face. (n.d.). Retrieved from https://huggingface.co/
    Thank you for taking the time to read this! If you found it insightful, clap, comment, and share it with others who might benefit. This was a basic introduction; in my next story, I'll dive deeper into the details. Stay tuned!
    Published via Towards AI