• TOWARDSAI.NET
    Building Large Action Models: Insights from Microsoft
January 7, 2025 | Author(s): Jesus Rodriguez | Originally published on Towards AI. Header image created using Midjourney.
I recently started an AI-focused educational newsletter that already has over 175,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing at thesequence.ai (TheSequence | Jesus Rodriguez | Substack).

Action execution is one of the key building blocks of agentic workflows. One of the most interesting debates in that area is whether actions are executed by the model itself or by an external coordination layer. Supporters of the former hypothesis have lined up behind an approach known as large action models (LAMs), with projects like Gorilla or the Rabbit r1 as key pioneers. However, there are still only a few practical examples of LAM frameworks. Recently, Microsoft Research published one of the most complete papers in this area, outlining an end-to-end framework for building LAMs. Microsoft's core idea is simple: bridge the gap between the language understanding prowess of LLMs and the need for real-world action execution.

From LLMs to LAMs: A Paradigm Shift
The limitations of traditional LLMs in interacting with and manipulating the physical world necessitate the development of LAMs. While LLMs excel at generating intricate textual responses, their inability to translate understanding into tangible actions restricts their applicability in real-world scenarios. LAMs address this challenge by extending the expertise of LLMs from language processing to action generation, enabling them to perform actions in both physical and digital environments. This transition signifies a shift from passive language understanding to active task completion, marking a significant milestone in AI development.
Image Credit: Microsoft Research

Key Architectural Components: A Step-by-Step Approach
Microsoft's framework for developing LAMs outlines a systematic process, encompassing crucial stages from inception to deployment. The key architectural components include:

Data Collection and Preparation
This foundational step involves gathering and curating high-quality, action-oriented data for specific use cases. This data includes user queries, environmental context, potential actions, and any other relevant information required to train the LAM effectively. A two-phase data collection approach is adopted:
Task-Plan Collection: This phase focuses on collecting data consisting of tasks and their corresponding plans. Tasks represent user requests expressed in natural language, while plans outline detailed step-by-step procedures designed to fulfill these requests. This data is crucial for training the model to generate effective plans and enhance its high-level reasoning and planning capabilities. Sources for this data include application documentation, online how-to guides like WikiHow, and historical search queries.
Task-Action Collection: This phase converts task-plan data into executable steps. It involves refining tasks and plans to be more concrete and grounded within a specific environment. Action sequences are generated, representing actionable instructions that directly interact with the environment, such as select_text(text=hello) or click(on=Button(20), how=left, double=False). This data provides the necessary granularity for training a LAM to perform reliable and accurate task executions in real-world scenarios.
Image Credit: Microsoft Research
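To make the two-phase data format more concrete, here is a minimal sketch of what task-plan and task-action records could look like. The field names and example values are illustrative assumptions on my part, not the schema used in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskPlanRecord:
    """Phase-one data: a natural-language task paired with a step-by-step plan."""
    task: str        # user request in natural language
    plan: List[str]  # ordered, human-readable steps

@dataclass
class TaskActionRecord:
    """Phase-two data: the same task grounded into executable action strings."""
    task: str
    actions: List[str] = field(default_factory=list)

# Illustrative records (hypothetical values, not taken from the paper's dataset)
plan_record = TaskPlanRecord(
    task="Make the selected text bold in Word",
    plan=["Select the target text", "Open the Home tab", "Click the Bold button"],
)
action_record = TaskActionRecord(
    task="Make the selected text bold in Word",
    actions=[
        "select_text(text=hello)",
        "click(on=Button(20), how=left, double=False)",
    ],
)
```

Keeping plans and grounded actions as separate record types mirrors the staged training described next: the model first learns to plan, then learns to translate plans into environment-specific actions.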
Model Training
This stage involves training or fine-tuning LLMs to perform actions rather than merely generate text. A staged training strategy, consisting of four phases, is employed:
Phase 1: Task-Plan Pretraining: This phase focuses on training the model to generate coherent and logical plans for various tasks, utilizing a dataset of 76,672 task-plan pairs. This pretraining establishes a foundational understanding of task structures, enabling the model to decompose tasks into logical steps.
Phase 2: Learning from Experts: The model learns to execute actions by imitating expert-labeled task-action trajectories. This phase aligns plan generation with actionable steps, teaching the model how to perform actions based on observed UI states and corresponding actions.
Phase 3: Self-Boosting Exploration: This phase encourages the model to explore and handle tasks that even expert demonstrations failed to solve. By interacting with the environment and trying alternative strategies, the model autonomously generates new success cases, promoting diversity and adaptability.
Phase 4: Learning from a Reward Model: This phase incorporates reinforcement learning (RL) principles to optimize decision-making. A reward model is trained on success and failure data to predict the quality of actions. This model is then used to fine-tune the LAM in an offline RL setting, allowing the model to learn from failures and improve action selection without additional environmental interactions.
Image Credit: Microsoft Research
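One way to read Phase 4 is as reward-weighted, offline fine-tuning: trajectories the reward model scores highly contribute more to the imitation loss, while failures are down-weighted, all without new environment interactions. The sketch below illustrates that general idea only; it is an assumption about how such a step could be implemented, not Microsoft's training code, and the temperature-like knob beta is a made-up parameter.

```python
import math
from typing import Sequence

def reward_weighted_loss(trajectory_logprobs: Sequence[float],
                         reward_scores: Sequence[float],
                         beta: float = 1.0) -> float:
    """Offline, reward-weighted imitation loss (a sketch of the Phase 4 idea).

    Each trajectory's negative log-likelihood under the LAM is weighted by
    exp(beta * reward), so actions the reward model rates highly are
    reinforced and low-scoring ones are down-weighted.
    """
    weights = [math.exp(beta * r) for r in reward_scores]
    total = sum(weights)
    return sum(w * (-lp) for w, lp in zip(weights, trajectory_logprobs)) / total

# Toy example: two logged trajectories, one rated well by the reward model, one poorly.
logprobs = [-2.3, -5.7]   # summed log-probs the LAM assigns to its own actions
scores = [0.9, 0.1]       # reward-model predictions in [0, 1]
print(reward_weighted_loss(logprobs, scores))  # weighted toward the high-reward trajectory
```

In practice this would be one term inside a full fine-tuning loop; the point is only that the reward model, rather than the live environment, supplies the training signal.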
Integration and Grounding
The trained LAM is integrated into an agent framework, enabling interaction with external tools, maintaining memory, and interfacing with the environment. This integration transforms the model into a functional agent capable of making meaningful impacts in the physical world. Microsoft's UFO, a GUI agent for Windows OS interaction, exemplifies this integration. The AppAgent within UFO serves as the operational platform for the LAM.

Evaluation
Rigorous evaluation processes are essential to assess the reliability, robustness, and safety of the LAM before real-world deployment. This evaluation involves testing the model in a variety of scenarios to ensure generalization across different environments and tasks, as well as effective handling of unexpected situations. Both offline and online evaluations are conducted:
Offline Evaluation: The LAM's performance is assessed using an offline dataset in a controlled, static environment. This allows for systematic analysis of task success rates, precision, and recall metrics.
Online Evaluation: The LAM's performance is evaluated in a real-world environment. This involves measuring aspects like task completion accuracy, efficiency, and effectiveness.
Image Credit: Microsoft Research

Key Building Blocks: Essential Features of LAMs
Several key building blocks empower LAMs to perform complex real-world tasks:
Action Generation: The ability to translate user intentions into actionable steps grounded in the environment is a defining feature of LAMs. These actions can manifest as operations on graphical user interfaces (GUIs), API calls for software applications, physical manipulations by robots, or even code generation.
Dynamic Planning and Adaptation: LAMs are capable of decomposing complex tasks into subtasks and dynamically adjusting their plans in response to environmental changes. This adaptive planning ensures robust performance in dynamic, real-world scenarios where unexpected situations are common.
Specialization and Efficiency: LAMs can be tailored for specific domains or tasks, achieving high accuracy and efficiency within their operational scope. This specialization allows for reduced computational overhead and improved response times compared to general-purpose LLMs.
Agent Systems: Agent systems provide the operational framework for LAMs, equipping them with tools, memory, and feedback mechanisms. This integration allows LAMs to interact with the world and execute actions effectively. UFO's AppAgent, for example, employs components like action executors, memory, and environment data collection to facilitate seamless interaction between the LAM and the Windows OS environment.

The UFO Agent: Grounding LAMs in Windows OS
Microsoft's UFO agent exemplifies the integration and grounding of LAMs in a real-world environment. Key aspects of UFO include:
Architecture: UFO comprises a HostAgent for decomposing user requests into subtasks and an AppAgent for executing these subtasks within specific applications. This hierarchical structure facilitates the handling of complex, cross-application tasks.
AppAgent Structure: The AppAgent, where the LAM resides, consists of the following components (a minimal loop tying them together is sketched below):
Environment Data Collection: The agent gathers information about the application environment, including UI elements and their properties, to provide context for the LAM.
LAM Inference Engine: The LAM, serving as the brain of the AppAgent, processes the collected information and infers the necessary actions to fulfill the user request.
Action Executor: This component grounds the LAM's predicted actions, translating them into concrete interactions with the application's UI, such as mouse clicks, keyboard inputs, or API calls.
Memory: The agent maintains a memory of previous actions and plans, providing crucial context for the LAM to make informed and adaptive decisions.
Image Credit: Microsoft Research
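The sketch below shows how those four components could be wired into an observe-infer-act-remember loop. It is an illustrative assumption, not code from UFO: the function names, the FINISH stop signal, and the step limit are all hypothetical.

```python
from typing import Any, Callable, Dict, List

def run_app_agent(
    request: str,
    collect_env: Callable[[], Dict[str, Any]],                   # environment data collection: UI elements and properties
    lam_infer: Callable[[str, Dict[str, Any], List[str]], str],  # LAM inference engine: returns the next action string
    execute: Callable[[str], None],                              # action executor: grounds the action as clicks/keys/API calls
    max_steps: int = 20,
) -> List[str]:
    """Minimal observe-infer-act-remember loop in the spirit of UFO's AppAgent."""
    memory: List[str] = []                   # history of previous actions, fed back to the LAM
    for _ in range(max_steps):
        observation = collect_env()          # 1. observe the current UI state
        action = lam_infer(request, observation, memory)  # 2. let the LAM pick the next action
        if action == "FINISH":               # hypothetical stop signal, not UFO's real protocol
            break
        execute(action)                      # 3. translate the action into a concrete UI interaction
        memory.append(action)                # 4. remember it for the next decision
    return memory
```

A HostAgent sitting above a loop like this would decompose a cross-application request into subtasks and hand each one to an AppAgent instance.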
Evaluation and Performance: Benchmarking LAMs
Microsoft employs a comprehensive evaluation framework to assess the performance of LAMs in both controlled and real-world environments. Key metrics include:
Task Success Rate (TSR): This measures the percentage of tasks successfully completed out of the total attempted. It evaluates the agent's ability to accurately and reliably complete tasks.
Task Completion Time: This measures the total time taken to complete a task, from the initial request to the final action. It reflects the efficiency of the LAM and agent system.
Object Accuracy: This measures the accuracy of selecting the correct UI element for each task step. It assesses the agent's ability to interact with the appropriate UI components.
Step Success Rate (SSR): This measures the percentage of individual steps completed successfully within a task. It provides a granular assessment of action execution accuracy.
In online evaluations using Microsoft Word as the target application, LAM achieved a TSR of 71.0%, demonstrating competitive performance compared to baseline models like GPT-4o. Importantly, LAM exhibited superior efficiency, achieving the shortest task completion times and lowest average step latencies. These results underscore the efficacy of Microsoft's framework in building LAMs that are not only accurate but also efficient in real-world applications.

Limitations
Despite the advancements made, LAMs are still in their early stages of development. Key limitations and future research areas include:
Safety Risks: The ability of LAMs to interact with the real world introduces potential safety concerns. Robust mechanisms are needed to ensure that LAMs operate safely and reliably, minimizing the risk of unintended consequences.
Ethical Considerations: The development and deployment of LAMs raise ethical considerations, particularly regarding bias, fairness, and accountability. Future research needs to address these concerns to ensure responsible LAM development and deployment.
Scalability and Adaptability: Scaling LAMs to new domains and tasks can be challenging due to the need for extensive data collection and training. Developing more efficient training methods and exploring techniques like transfer learning are crucial for enhancing the scalability and adaptability of LAMs.

Conclusion
Microsoft's framework for building LAMs represents a significant advancement in AI, enabling a shift from passive language understanding to active real-world engagement. The framework's comprehensive approach, encompassing data collection, model training, agent integration, and rigorous evaluation, provides a robust foundation for building LAMs. While challenges remain, the transformative potential of LAMs in revolutionizing human-computer interaction and automating complex tasks is undeniable. Continued research and development efforts will pave the way for more sophisticated, reliable, and ethically sound LAM applications, bringing us closer to a future where AI seamlessly integrates with our lives, augmenting human capabilities and transforming our interaction with the world around us.
  • WWW.IGN.COM
    Private Division Games including Tales of the Shire and Kerbal Space Program to Be Distributed by New Label From Annapurna Interactive's Former Staff
In a rather unusual merging of two completely separate reported stories from last year, the former staff of Annapurna Interactive are seemingly preparing to take over the portfolio of shuttered indie label Private Division. At least some of the remaining Private Division employees are expected to be laid off in the process.

This is according to a report from Bloomberg, which states that a currently unnamed company staffed by former Annapurna Interactive employees has reached a deal with private equity firm Haveli to take over the distribution of Private Division titles. Bloomberg reports, and IGN can independently confirm, that Haveli is the company that purchased Private Division from Take-Two Interactive last year for an undisclosed amount.

The portfolio includes both current and upcoming Private Division titles, such as Tales of the Shire, which releases March 25. It also includes Project Bloom, a AAA action-adventure game developed by Game Freak that was announced back in 2023 with nothing more than a concept art teaser.

Concept art for Game Freak's Project Bloom, formerly published by Private Division.

As Bloomberg reports, Haveli's purchase of Private Division included not just the portfolio, but 20 employees who remained with the label following Take-Two-implemented layoffs last spring. Remaining employees have reportedly been told to explore other employment options, with the expectation that at least some of them will be laid off as part of the deal with the unnamed new publisher.

Private Division was formerly Take-Two's publishing label, which the company founded back in 2017. It was intended to support independent games that were smaller than the fare typically supported by the Grand Theft Auto publisher. Over the years, Private Division produced titles such as The Outer Worlds, OlliOlli World, and Kerbal Space Program 2, but game sales repeatedly fell short of Take-Two's expectations. Early last year, we reported that Take-Two was slowly shuttering operations at Private Division, first by winding down operations at its supported studios and then by selling off the label.

The new owners of Private Division's portfolio are a group of former employees of Annapurna Interactive who collectively resigned last year following a leadership dispute at Annapurna. We reported last fall on the messy circumstances, which left Annapurna seeking to restaff an entire publishing team to cover its numerous obligations and roughly 25 individuals seeking new employment.

Rebekah Valentine is a senior reporter for IGN. You can find her posting on BlueSky @duckvalentine.bsky.social. Got a story tip? Send it to rvalentine@ign.com.
  • WWW.IGN.COM
    Sony Debuts Tech Where Players Can See, Shoot, and Smell Baddies From Games Like The Last of Us
Sony has debuted conceptual technology at CES 2025 that would essentially allow gamers to enter the worlds of PlayStation games such as The Last of Us and see, shoot, and even smell the baddies.

Footage of the Future Immersive Entertainment Concept was shared in the video below, and shows a giant cube players enter, similar to the multi-person virtual reality experiences currently available. This isn't VR, however, as the cube is instead made of incredibly high definition screens that present the game world around the player.

Looking to boost the immersion further, players are also delivered "engaging audio" and even "scent and atmospherics with interactive PlayStation game content." The players in the video also had imitation weapons and would shoot at the screens as clickers appeared, presumably getting a whiff of rotten fungus and other offenses as they did so.

Sony Future Immersive Entertainment Concept - The Last of Us

The demo was made using gameplay pulled from The Last of Us but obviously adapted to work for the Future Immersive Entertainment Concept, so unfortunately for fans it is not a further look into the beloved world.

Even if it was, of course, this technology is still years away and presumably incredibly expensive, so there are myriad limitations to actually getting it into the public's hands (or getting the public into it). Sony may showcase it with other franchises in the future too, such as God of War, Horizon, and so on.

The Last of Us has otherwise been dormant since 2020 when Part 2 was released, outside of a remake of the first game and a remaster of the second. Another entry may not appear for a while either, as a multiplayer take was recently scrapped and developer Naughty Dog is currently focused on a new sci-fi franchise called Intergalactic: The Heretic Prophet.

Ryan Dinsdale is an IGN freelance reporter. He'll talk about The Witcher all day.
  • WWW.DENOFGEEK.COM
Squid Game Season 3 Has Three Games to Go: Here's What They Could Be
This article contains spoilers for Squid Game season 2.

Unlike the first season of Squid Game, season 2 ends before the games are truly done. Despite his best efforts to end the games early, first through democracy and, when that doesn't work, through rebellion, Seong Gi-hun (Lee Jung-jae) and the survivors will likely have to face a few more deadly games when the series returns for its third season.

We may not yet know when this year we'll get to see season 3, but that hasn't stopped fans from theorizing about what games the players could face when Squid Game returns. Based on the number of games we saw in season 1, it's been surmised that there will likely be three more challenges next season.

One of those games is thought to be some version of monkey bars. Fans on Reddit and TikTok have pointed out that the stick figures on the walls of the players' quarters in season 2 look like they are swinging across something. In season 1, tug of war and the glass bridge were both games that forced the losers to fall from a great height, so a deadly version of monkey bars that forces the players to either hang on for long periods of time or knock other players off to swing across isn't that wild of a guess.

Another game that can be seen on the walls in the background of season 2 is something that looks an awful lot like chess or checkers. According to fan theories, this version of chess would likely involve the players stepping in as the pieces, with Gi-hun in charge of moving one group across the board and the Front Man (Lee Byung-hun) controlling the other group. Since this is Squid Game, however, that means that any players taken off the board would likely be killed. So of course this means that Gi-hun would likely be trying to play in a way that keeps as many people alive on both sides as possible, while the Front Man would be looking to get as many pieces off the board as possible.

The third game is hinted at in the mid-credits scene of season 2, where we see a railroad crossing with a green light and red light, along with the infamous Red Light, Green Light doll and a new boy doll we've never seen before. This could indicate one of two things, according to TikToker MidwestMarvelGuy. The first theory is that this game will include some twisted version of the Trolley Problem, a popular philosophical debate that asks whether a person would intervene to save one person over many. This could very well be the basis of a punishment for the players who took part in the rebellion, especially for Gi-hun.

However, the true game that's likely at the heart of this potential challenge is called Dong Dong Dong Dae-mun. This game involves two kids forming an arch while other kids form a line and run underneath them as they sing a nursery rhyme of the same name. Once the song stops, the arch comes down, and anyone caught in their arms is out. We all know that Squid Game loves to give us creepy nursery rhymes, and that could explain the addition of the boy robot. The two robots would likely be the gatekeepers as the players run underneath them, with the lights also indicating when to stop and when to go.

All of these games sound like they'd be terrifying to play in this scenario, so best of luck to the players if any of these theories end up coming true. Regardless of what games season 3 has in store for Gi-hun and the survivors, we'll be on the edge of our seats, cheering them on.

All seven episodes of Squid Game season 2 are available to stream on Netflix now.
  • The Belly Bumpers Preview is Now Available for Xbox Insiders!
Xbox Insiders can now join the preview for Belly Bumpers! Use your belly to knock out other players in this 2-8 player online and local party game. Eat juicy burgers to increase the size of your belly, or force-feed opponents until they burst. Bump it out in a world of food-themed stages.

About Belly Bumpers
Local and Online Multiplayer: Bump your friends out of the arena in this belly-centric party game. Play with 2-8 local players on a single device! Cross-platform online multiplayer is also supported for 2-8 players.
Eat Burgers: Eat burgers to increase the size of your belly and the power of your bumpers. Be careful as you eat a variety of foods, because some have horrible side effects!
Force-Feed Opponents: Force-feed opponents until their bellies burst by slapping them in the face with food!
Food Arenas: Bump it out on over 10 food-themed stages. Each stage has unique interactive food elements such as stretchy licorice!
Custom Games: Play with only the tastiest foods. Set your own rules, modify bump powers, choose stages, and more in our customizable games.

How to Participate:
1. Sign in on your Xbox Series X|S or Xbox One console and launch the Xbox Insider Hub app (install the Xbox Insider Hub from the Store first if necessary).
2. Navigate to Previews > Belly Bumpers.
3. Select Join.
4. Wait for the registration to complete, and you should be directed to the Store page to install it.

How to Provide Feedback:
If you experience any issues while playing Belly Bumpers, don't forget to use Report a problem so we can investigate:
1. Hold down the home button on your Xbox controller.
2. Select Report a problem.
3. Select the Games category and Belly Bumpers subcategory.
4. Fill out the form with the appropriate details to help our investigation.

Other resources:
For more information, follow us on Twitter at @XboxInsider and this blog for release notes, announcements, and more. And feel free to interact with the community on the Xbox Insider SubReddit.
  • 9TO5MAC.COM
    Swift Student Challenge kicks off February 3 for three weeks only
Apple is preparing to kick off its annual Swift Student Challenge again this year, with entries opening in just under a month and lasting three weeks only. Here are the details.

350 winners will be selected, with 50 invited to a three-day Cupertino experience

Apple runs the Swift Student Challenge annually as an opportunity to motivate and recognize the creativity and excellence of student developers.

From Apple's website:

Apple is proud to support and uplift the next generation of developers, creators, and entrepreneurs with the Swift Student Challenge. The Challenge has given thousands of student developers the opportunity to showcase their creativity and coding capabilities through app playgrounds, and learn real-world skills that they can take into their careers and beyond.

This year, Apple plans to select 350 winners total whose submissions demonstrate excellence in innovation, creativity, social impact, or inclusivity. Out of those 350, a smaller group of 50 will earn the title Distinguished Winners and be invited to three days in Cupertino (presumably at WWDC 2025).

The 2025 Swift Student Challenge will begin accepting applications on Monday, February 3, and applications will close three weeks later.

To prepare for the Challenge, Apple is inviting both students and educators to join an upcoming online session where more info about the requirements will be shared, plus tips and inspiration from a former Challenge winner and more.

Last year's Distinguished Winners got to mingle with Apple CEO Tim Cook at Apple Park before WWDC's kickoff. So if you need additional motivation, a photo op with the CEO himself may be in the cards.

Are you going to participate in Apple's Swift Student Challenge this year? Let us know in the comments.
  • 9TO5MAC.COM
You can manually disable certain Apple Intelligence features, here's how
Apple Intelligence's ever-growing feature set has brought additional storage requirements on your device, but it's also come with new controls over which features are enabled. Here's how to manually disable certain Apple Intelligence features on your iPhone and more.

Screen Time includes a method to disable three types of Apple Intelligence features

Apple Intelligence is mostly an all-or-nothing feature set. When you enable AI from your iPhone's Settings app, or as part of an iOS setup walkthrough, you're activating nearly the entire Apple Intelligence feature set. But there's also a way to selectively scale back.

Inside Screen Time, Apple has built in options to disable or enable three different categories of Apple Intelligence:
1. Image Creation
2. Writing Tools
3. ChatGPT Extension

The first category applies to Image Playground, Genmoji, and Image Wand. There's no way to turn off just one of these features, but you can disable all of them with a single control. Writing Tools refers to the AI tools that compose, proofread, rewrite, or reformat your text. And ChatGPT is self-explanatory, though it's perhaps an odd addition, since there's already a separate ChatGPT toggle inside Apple Intelligence's own Settings menu.

How to disable certain Apple Intelligence features

To find the above options inside Screen Time, here are the steps you'll need to follow:
1. Open the Settings app
2. Go to the Screen Time menu
3. Open Content & Privacy Restrictions
4. Make sure that the green toggle at the top is on
5. Then open Intelligence & Siri to find the AI controls

After you've disabled a given feature, you'll notice that even UI elements referencing it will disappear. For example, disabling Image Creation will remove the glowing Genmoji icon from the emoji keyboard. And disabling Writing Tools will remove the icon from the Notes toolbar and the copy/paste menu.

Note: in my testing, it usually takes a little time or an app force-quit before the relevant AI interface elements actually disappear.

Do you plan to disable any Apple Intelligence features? Let us know in the comments.
  • 9TO5MAC.COM
    ShiftCam unveils PLANCK: An ultra-tiny SSD for recording ProRes video on your iPhone
ShiftCam, a company behind many great iPhone accessories for content creators, unveiled PLANCK at CES 2025. The company calls it the world's smallest portable SSD, and it'll be great for helping you record large videos on your iPhone.

PLANCK comes in at just 10 grams and plugs directly into your iPhone's USB-C port. It offers transfer speeds up to 1050 MB/s, and supports recording ProRes at 4K/120 on iPhone 15 Pro and later. It's certainly impressively small, and it makes recording large videos on your iPhone a lot easier, especially since Apple charges an arm and a leg for higher-capacity iPhone storage.

ShiftCam says PLANCK is drop-proof, IP65 water-resistant, and tough enough to travel wherever your creativity demands. It's built to handle the demands of on-the-go content creation.

In their press release, ShiftCam CEO Benson Chiu said the following:

Our mission with PLANCK is to eliminate the barriers of traditional storage and empower creators to focus on their craft. We've designed a tool that's so portable and powerful, it becomes an invisible yet essential part of any creator's workflow.

PLANCK will be available in both 1TB and 2TB capacities, and will launch in the spring for $189 and $299, respectively.

ShiftCam will also be opening a Kickstarter campaign next month, where you can preorder PLANCK at a lower price. During the preorder period, you'll be able to grab 1TB for $125 and 2TB for $199. However, they haven't yet announced the Kickstarter page.

ShiftCam also announced a USB-C hub, allowing you to connect multiple PLANCK SSDs to your iPhone if you'd like. Pricing for this hub is not yet available.

Follow Michael: X/Twitter, Bluesky, Instagram
  • FUTURISM.COM
    New Orleans Terrorist Used Meta Ray-Ban Smart Glasses, FBI Says
The footage is bone-chilling.

FPV Terrorism
The FBI has released extensive first-person footage recorded by Shamsud-Din Jabbar, the man who drove a rented pickup truck into a crowd in New Orleans on New Year's Day, killing 14 people.

Some of the footage was recorded on a pair of Meta's Ray-Ban smart glasses, catapulting the gadget into the center of that grisly scene, which ended in a police shootout resulting in Jabbar's death.

The recordings show Jabbar walking through the streets of New Orleans ahead of the attack, even giving us a close glimpse of his face as he peers into a mirror while wearing the glasses. Security footage shows him wearing the glasses while preparing for the unconscionable act, likely allowing him to record the scene without drawing much attention.

It's a bone-chilling and intimate perspective into the mind of a killer, facilitated by a pair of critically acclaimed smart glasses that turned into a sleeper hit for Meta last year.

Attack Planning
Though he seems to have used them during planning, Jabbar didn't turn on the glasses during the attack.

"Meta glasses appear to look like regular glasses but they allow a user to record video and photos hands-free," FBI New Orleans Special Agent Lyonel Myrthil told reporters, as quoted by Gizmodo. "They also allow the user to potentially livestream through their video. Jabbar was wearing a pair of Meta glasses when he conducted the attack on Bourbon Street. But he did not activate the glasses to livestream his actions that day."

Meta was caught off guard by the glasses' popularity, with CEO Mark Zuckerberg claiming that they were a "bigger hit sooner than we expected" during a July earnings call, with demand "still outpacing our ability to build them." That's despite several companies, including Google and Snapchat, releasing similar products in years past, both of which fared far worse.

A tiny LED light on the front of the glasses indicates whether somebody is using them to record video. But it's easy to cover up, and experts have since warned that the specs could easily be used to invade the privacy of others. And as it turns out, that could also make them a useful tool in the arsenal of a terrorist.

"From a reconnaissance perspective, you're really getting a sense of the eyeline and eyesight and all the things that you're going to want to look out for if you're trying to plan an attack," strategic counterterrorism expert Sam Hunter told NBC News. "It's starting to get more and more into the footage of this is what it actually looks like and feels like when you're in that environment."

"I would not be surprised if you see versions of them or folks using them for attack planning in the future, again because they're so discreet in terms of capturing that footage," he added.
  • WEWORKREMOTELY.COM
    Fingerprint: Senior Android Engineer
Who We Are Looking For:

Key Responsibilities:
- Design, develop, and optimize native Android applications and libraries, focusing on security, scalability, and performance.
- Collaborate with cross-functional teams to implement security protocols and ensure that applications align with best security practices.
- Build developer tools that enhance productivity and streamline the development process.
- Contribute to the maintenance and advancement of security frameworks for mobile applications.
- Provide mentorship to junior team members and contribute to their technical development.

Requirements:
- Education: Degree in Computer Science, Information Security, or equivalent experience.
- Experience: 5+ years as a Senior Android Engineer (prior experience in a Senior Android role is required).
- Strong Information Security background, with practical experience in: network protocols and secure coding practices; cryptography and security framework development.
- Proven experience in building Android applications from the ground up, with expertise in Kotlin and vanilla Android SDKs. Experience with Jetpack Compose is preferred.
- Experience with Android NDK is a plus.
- Contributions to Open Source projects on GitHub are a bonus.

Offers vary depending on, but not limited to, relevant experience, education, certifications/licenses, skills, training, and market conditions.

Due to regulatory and security reasons, there's a small number of countries where we cannot have Fingerprint teammates based. Additionally, because Fingerprint is an all-remote company and people can join our workforce from almost any country, we do not sponsor visas. Fingerprint teammates need to be authorized to work from their home location.

We are dedicated to creating an inclusive work environment for everyone. We embrace and celebrate the unique experiences, perspectives and cultural backgrounds that each employee brings to our workplace. Fingerprint strives to foster an environment where our employees feel respected, valued and empowered, and our team members are at the forefront in helping us promote and sustain an inclusive workplace. We highly encourage people from underrepresented groups in tech to apply.