• TECHREPORT.COM
    Ex-employee Accuses Apple of Illegal Surveillance In a Lawsuit
Key Takeaways:
• Amar Bhakta, who worked in digital advertising at Apple, has filed a lawsuit accusing the company of illegally surveilling employees and suppressing free speech.
• In the lawsuit, he claims that Apple monitors employees' personal devices, even when they are off duty.
• Apple has denied these allegations. A spokesperson said the company actually encourages its employees to speak up about their wages, hours, and working conditions.

Apple has been sued by an ex-employee for allegedly spying on employee devices. The plaintiff, Amar Bhakta, has worked in digital advertising at Apple since 2020.

In the lawsuit, filed in a California state court, Bhakta claims that Apple's work environment is like a prison yard, with employees constantly subjected to physical, video, and electronic surveillance.

He further claims that the company monitors everything an employee does on their personal devices. Employees are required to link their personal iCloud accounts to their work systems and allow the company to install monitoring software on their personal iPhones. This lets the company continually check employees' emails, location, photos, and videos, even when they are off the clock. The surveillance also extends to devices in an employee's home office.

Bhakta also accuses the company of suppressing free speech. For example, he says he was barred from talking about his work on a podcast and was asked to remove his Apple job title from his LinkedIn profile, which is now making it difficult for him to find another job.

If true, these accusations would violate California's labor code. Bhakta is seeking compensation for his own troubles as well as changes at the company so that no other employee has to go through the same ordeal. In addition to paying damages to Bhakta, Apple might also have to pay a separate penalty to the state.

Apple has denied all of these accusations. Spokesperson Josh Rosenstock pushed back against the free-speech allegation in particular, saying the company trains its employees on their rights to freely discuss their wages, hours, and working conditions.

However, this isn't the first time the company has been accused of treating its employees poorly. Outten & Golden, the law firm representing Bhakta, is also representing two other women who accuse the company of discriminating against its female employees. According to that lawsuit, filed in June, the company paid its female employees lower wages than their male counterparts. Approximately 12,000 female employees in its California operations, across departments such as engineering, marketing, and customer support, are said to be affected by this discrimination. Apple denied those allegations as well.
  • This bird-inspired drone is more energy efficient and proficient at complex flight maneuvers
Artificial Flight: Modern drones have become pretty advanced, but they are very energy-inefficient. European researchers took inspiration from birds to develop a new type of drone that consumes less power and mimics its real-life counterpart's complex movements.

Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland developed the Robotic Avian-inspired Vehicle for Multiple Environments (RAVEN) by adding hind limbs to a fixed-wing design. The result is a drone that takes off more quickly than a traditional model and performs complex maneuvers that mimic real birds.

The researchers note that birds inspired the invention of airplanes, but even modern planes are far from perfect flying machines. A bird can go from walking to flying and back again almost instantly, with no need for a runway or a launcher. Engineers have yet to reproduce this kind of biological versatility in artificial designs.

The RAVEN study aims to maximize "gait diversity" while minimizing mass. The bird-inspired multifunctional legs allow the drone to rapidly take off and fly, walk on the ground, and hop over small obstacles. The researchers compared RAVEN to a real raven to demonstrate the drone's capabilities.

Taking off with a jump can "substantially" contribute to the initial flight speed and is more energy efficient than the legless design of traditional drones. The EPFL researchers refined the robotic legs using mathematical models, computer simulations, and experimental iterations.

The project's final result is an optimal balance between the artificial limbs' mechanical complexity and the RAVEN drone's overall weight (0.62 kg). The hind limbs keep heavier components close to the drone's main body, while springs and motors mimic a bird's powerful tendons and muscles.

Evolution and biology solved the flight problem ages ago with birds, but the researchers had to work hard to emulate the same versatility in a drone design. The study suggests that RAVEN's multifunctional robotic legs can expand deployment opportunities compared to traditional fixed-wing aircraft. Thanks to their ability to take off autonomously, the new drones could operate in complex terrain and other hazardous conditions to get their job done.
  • WWW.TECHSPOT.COM
    A USB-C cable can hide a lot of malicious hardware, CT scan shows
Why it matters: A simple search on Amazon or any other online retailer shows that consumers have a wide selection of USB-C cables, with options ranging from just a few dollars to over $100. The price depends primarily on things like length, build quality, compliance with various parts of the USB-C spec, and branding. While USB-C may be the most flexible connection for digital devices, it's also confusing enough that it pays to learn the intricacies of this ever-evolving standard.

The standard's primary goal is to simplify things so consumers can use a single cable for data, audio, video, and power delivery. However, cables' specs are not always the same, and packaging is often vague about a cable's capabilities. There is also the potential for USB-C cables to hide malicious circuitry that compromises the security of your device.

At first glance, USB-C cables look mostly the same, but some contain active circuitry. Thanks to equipment like Lumafield's Neptune Industrial X-Ray CT Scanner, we can see that the internal design of something like Apple's $129 Thunderbolt 4 USB-C cable is much more complex than that of an $11.69 Amazon Basics cable, which doesn't even use all the pins on the USB-C connector.

More recently, Lumafield investigated an O.MG USB-C cable, another example of the sophisticated electronics that can hide inside a normal-looking USB-C connector. The O.MG cable is a niche product created by Mike Grover, designed for security research and to raise awareness of the potentially malicious hardware users could find in the wild.

John Bruner of Lumafield says that many people who saw the previous scans were understandably worried that what looks like an ordinary USB connector could easily contain hardware that can inject malicious code, log keystrokes, and extract personal data.

Notably, the O.MG cable features a clever design that could make its circuitry easy to overlook with standard inspection methods. While an ordinary 2D X-ray scan would quickly reveal the antenna and microcontroller, it took a 3D scan and some fiddling with visualization parameters to spot a second set of wires going to a second die stacked on top of the microcontroller.

Bruner believes CT scanning is quickly becoming an important security tool for verifying the integrity of hardware during manufacturing, before it has a chance to harm individuals, companies, and critical infrastructure. An undetected supply chain attack can have serious consequences, as shown by the recent example of exploding pagers used in Lebanon to target Hezbollah leaders.

Fortunately, the average consumer doesn't need to worry about explosives inside their cables, and products like the O.MG cable are usually too expensive for use against the general public, with these specialty devices going for up to $200. Even the EvilCrow Wind cable, a more affordable alternative that hides a powerful ESP32-S3 SoC with Wi-Fi and Bluetooth connectivity, still costs over $60.

Still, Bruner recommends using certified USB-C cables and avoiding public USB charging ports when possible.
  • WWW.DIGITALTRENDS.COM
    Windows 11 Recall officially comes to Intel and AMD
Microsoft is finally expanding support for the Recall AI feature to Copilot+ PCs running Intel and AMD processors, after the function returned from a bevy of issues.

The company made Recall available exclusively to Copilot+ PCs running Qualcomm processors in a late-November Windows 11 update, giving Windows Insiders in the Dev Channel access to the AI feature that takes snapshots of your PC so you can search and look up aspects of your device later.

After several mishaps with Recall, including an issue where the feature was not properly saving snapshots, it now appears stable enough to work on a wider range of Copilot+ PCs. Intel- and AMD-powered devices will receive the latest version of Recall as a software update on Friday. This 26120.2510 (KB5048780) update is also available only in the Windows Insiders Dev Channel.

Given prior privacy concerns surrounding the feature, Microsoft has been very deliberate about how Recall works on a device. While the models that power the feature will install on your PC with the update, you must manually enable the snapshots function for Recall to work. Additionally, you can set how long a device saves snapshots before deleting them. Finally, the feature does not record sensitive information within snapshots, such as credit card details, passwords, and personal ID numbers.

The update also includes a number of security measures to fortify the feature. Recall now requires Windows Hello facial recognition to confirm your identity before you can access snapshots. You also need BitLocker and Secure Boot enabled to use the feature.

Microsoft is also highlighting the Click to Do feature within Recall, which lets you click an element of a snapshot to turn it into something actionable on your desktop, such as copying text or saving images. The feature is triggered with the Windows key + mouse click.

Recall has come a long way since it was first announced much earlier this year. It was intended for a preview release in June, but various controversies led to the feature being pulled from release and then repeatedly delayed.
  • WWW.DIGITALTRENDS.COM
    Is Conclave streaming? Find out when the Oscar contender heads to Peacock
One of the year's biggest Oscar contenders heads to streaming before the end of 2024: Conclave arrives on Peacock on Friday, December 13.

Based on Robert Harris' bestselling novel, Conclave is a thriller about the secretive process of selecting a new pope. After the pope unexpectedly dies, the College of Cardinals gathers under one roof for a papal conclave led by Thomas Cardinal Lawrence (Ralph Fiennes). During the deliberations, Cardinal Lawrence discovers a series of troubling secrets that, if made public, would ruin the Catholic Church.

Besides Fiennes, Conclave stars Stanley Tucci as Aldo Cardinal Bellini, John Lithgow as Joseph Cardinal Tremblay, Sergio Castellitto as Goffredo Cardinal Tedesco, and Isabella Rossellini as Sister Agnes. Edward Berger, the director of the Oscar-winning All Quiet on the Western Front, helms Conclave from a screenplay by Peter Straughan.

Released theatrically in October, Conclave has been a modest hit for Focus Features, generating $37 million worldwide on a $20 million budget.

Conclave has received a positive reception, with many critics believing the thriller will be a contender at the 2025 Oscars. Describing the slow-burn thriller, Alex Welch of Digital Trends said, "Conclave is never anything but absolutely gripping, and that is thanks in no small part to Fiennes' lead performance. It's one of the best any actor has given so far this year."

Fiennes is a shoo-in for a Best Actor nomination, and Conclave will almost certainly receive a Best Picture nomination, especially after being named one of the top 10 films of 2024 by the American Film Institute and the National Board of Review. Other potential categories where Conclave could receive recognition are Supporting Actor, Supporting Actress, Best Director, and Adapted Screenplay.

Stream Conclave on Peacock on December 13.
  • WWW.WSJ.COM
    Palantir, Anduril Partner to Advance AI for National Security
The two companies said they plan to use Anduril's software and systems to secure large-scale data retention and distribution.
  • WWW.WSJ.COM
    Super Micro Computer Granted Exceptional Extension to Publish Delayed Annual Report
Super Micro Computer said it has been granted an exceptional extension from Nasdaq that gives it until Feb. 25, 2025, to file its delayed annual report.
  • WWW.WSJ.COM
    Czech Philharmonic Review: Bohemia Comes to Carnegie Hall
With star soloists Yo-Yo Ma, Gil Shaham and Daniil Trifonov, the orchestra's three-night residency as part of the Year of Czech Music featured performances of Dvořák, Mahler, Smetana and Janáček, rendered with distinctive artistic character under the baton of Semyon Bychkov.
  • WWW.WSJ.COM
    Untitled Art Miami Beach 2024 Review: Serious Fun
    The annual fair included works that were aptly exuberant given its seaside setting, as well as weightier, more contemplative art.
  • ARSTECHNICA.COM
Google's Genie 2 world model reveal leaves more questions than answers
Making a command out of your wish? Long-term persistence and real-time interactions remain huge hurdles for AI worlds. Kyle Orland, Dec 6, 2024 6:09 pm

A sample of some of the best-looking Genie 2 worlds Google wants to show off. Credit: Google Deepmind

In March, Google showed off its first Genie AI model. After training on thousands of hours of 2D run-and-jump video games, the model could generate halfway-passable, interactive impressions of those games based on generic images or text descriptions.

Nine months later, this week's reveal of the Genie 2 model expands that idea into the realm of fully 3D worlds, complete with controllable third- or first-person avatars. Google's announcement talks up Genie 2's role as a "foundational world model" that can create a fully interactive internal representation of a virtual environment. That could allow AI agents to train themselves in synthetic but realistic environments, Google says, forming an important stepping stone on the way to artificial general intelligence.

But while Genie 2 shows just how much progress Google's DeepMind team has achieved in the last nine months, the limited public information about the model thus far leaves a lot of questions about how close we are to these foundational world models being useful for anything but some short but sweet demos.

How long is your memory?

Much like the original 2D Genie model, Genie 2 starts from a single image or text description and then generates subsequent frames of video based on both the previous frames and fresh input from the user (such as a movement direction or "jump"). Google says it trained on a "large-scale video dataset" to achieve this, but it doesn't say just how much training data was necessary compared to the 30,000 hours of footage used to train the first Genie.

Short GIF demos on the Google DeepMind promotional page show Genie 2 being used to animate avatars ranging from wooden puppets to intricate robots to a boat on the water. Simple interactions shown in those GIFs demonstrate those avatars busting balloons, climbing ladders, and shooting exploding barrels without any explicit game engine describing those interactions.

Those Genie 2-generated pyramids will still be there in 30 seconds. But in five minutes? Credit: Google Deepmind

Perhaps the biggest advance claimed by Google here is Genie 2's "long horizon memory." This feature allows the model to remember parts of the world as they come out of view and then render them accurately as they come back into the frame based on avatar movement. This kind of persistence has proven to be a persistent problem for video generation models like Sora, which OpenAI said in February "do[es] not always yield correct changes in object state" and can develop "incoherencies... in long duration samples."

The "long horizon" part of "long horizon memory" is perhaps a little overzealous here, though, as Genie 2 only "maintains a consistent world for up to a minute," with "the majority of examples shown lasting [10 to 20 seconds]." Those are definitely impressive time horizons in the world of AI video consistency, but they are pretty far from what you'd expect from any other real-time game engine.
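Google hasn't published Genie 2's architecture, but the behavior described above maps onto a familiar autoregressive pattern: each new frame is predicted from a bounded window of recent frames plus the latest user input. The Python sketch below is purely illustrative and assumes that pattern; every name in it (predict_next_frame, CONTEXT_FRAMES, the window size itself) is a hypothetical stand-in, not Genie 2's actual API. It also shows where the forgetting would come from: once a frame slides out of the fixed-size context, nothing in it can constrain what the model generates next.

from collections import deque

CONTEXT_FRAMES = 240  # hypothetical window (~12 s at 20 fps); the real size is unpublished

def predict_next_frame(context, action):
    # Stand-in for the model's forward pass. A real world model would
    # render a new image conditioned on the visible history and the
    # user's input; this stub just tags the latest frame so the loop runs.
    return f"{context[-1]}>{action}"

def interactive_rollout(first_frame, actions):
    # Bounded context: frames older than CONTEXT_FRAMES fall off the deque,
    # so nothing outside the window can constrain future frames; that is
    # the "forgotten town" failure mode described next.
    context = deque([first_frame], maxlen=CONTEXT_FRAMES)
    for action in actions:  # e.g., a movement direction or "jump"
        frame = predict_next_frame(context, action)
        context.append(frame)
        yield frame

# Drive the loop with a few sample inputs
for frame in interactive_rollout("prompt_image", ["forward", "forward", "jump"]):
    print(frame)

Under that framing, the one-minute consistency ceiling reads less like a bug and more like a hard context-window limit, which makes the following scenario easy to picture.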
Imagine entering a town in a Skyrim-style RPG, then coming back five minutes later to find that the game engine had forgotten what that town looks like and had generated a completely different town from scratch instead.

What are we prototyping, exactly?

Perhaps for this reason, Google suggests Genie 2 as it stands is less useful for creating a complete game experience and more useful as a way to "rapidly prototype diverse interactive experiences" or to turn "concept art and drawings... into fully interactive environments."

The ability to transform static concept art into lightly interactive "concept videos" could definitely be useful for visual artists brainstorming ideas for new game worlds. However, these kinds of AI-generated samples might be less useful for prototyping actual game designs that go beyond the visual.

"What would this bird look like as a paper airplane?" is a sample Genie 2 use case presented by Google, but not really the heart of game prototyping. Credit: Google Deepmind

It would look like this, by the way... Credit: Google Deepmind

On Bluesky, British game designer Sam Barlow (Silent Hill: Shattered Memories, Her Story) points out how game designers often use a process called whiteboxing to lay out the structure of a game world as simple white boxes well before the artistic vision is set. The idea, he says, is to "prove out and create a gameplay-first version of the game that we can lock so that art can come in and add expensive visuals to the structure. We build in lo-fi because it allows us to focus on these issues and iterate on them cheaply before we are too far gone to correct."

Generating elaborate visual worlds with a model like Genie 2 before designing that underlying structure feels a bit like putting the cart before the horse. The process almost seems designed to generate generic, "asset flip"-style worlds with AI-generated visuals papered over generic interactions and architecture. As podcaster Ryan Zhao put it on Bluesky, "The design process has gone wrong when what you need to prototype is 'what if there was a space.'"

Gotta go fast

When Google revealed the first version of Genie earlier this year, it also published a detailed research paper outlining the specific steps taken behind the scenes to train the model and how that model generated interactive videos. No such research paper has been published for Genie 2, leaving us guessing at some important details.

One of the most important of these details is model speed. The first Genie model generated its world at roughly one frame per second, a rate orders of magnitude slower than would be tolerably playable in real time. For Genie 2, Google only says that "the samples in this blog post are generated by an undistilled base model, to show what is possible. We can play a distilled version in real-time with a reduction in quality of the outputs." Reading between the lines, it sounds like the full version of Genie 2 operates at something well below the real-time interactions implied by those flashy GIFs.
It's unclear how much "reduction in quality" is necessary to get a distilled version of the model down to real-time controls, but given the lack of examples presented by Google, we have to assume that reduction is significant.

Oasis' AI-generated Minecraft clone shows great potential, but still has a lot of rough edges, so to speak. Credit: Oasis

Real-time, interactive AI video generation isn't exactly a pipe dream. Earlier this year, AI model maker Decart and hardware maker Etched published the Oasis model, showing off a human-controllable, AI-generated video clone of Minecraft that runs at a full 20 frames per second. However, that 500-million-parameter model was trained on millions of hours of footage of a single, relatively simple game, and focused exclusively on the limited set of actions and environmental designs inherent to that game.

When Oasis launched, its creators fully admitted the model "struggles with domain generalization," showing how "realistic" starting scenes had to be reduced to simplistic Minecraft blocks to achieve good results. And even with those limitations, it's not hard to find footage of Oasis degenerating into horrifying nightmare fuel after just a few minutes of play.

What started as a realistic-looking soldier in this Genie 2 demo degenerates into this blobby mess just seconds later. Credit: Google Deepmind

We can already see similar signs of degeneration in the extremely short GIFs shared by the Genie team, such as an avatar's dream-like fuzz during high-speed movement or NPCs that quickly fade into undifferentiated blobs at a short distance. That's not a great sign for a model whose "long horizon memory" is supposed to be a key feature.

A learning crèche for other AI agents?

From this image, Genie 2 could generate a useful training environment for an AI agent and a simple "pick a door" task. Credit: Google Deepmind

Genie 2 seems to be using individual game frames as the basis for the animations in its model. But it also seems able to infer some basic information about the objects in those frames and craft interactions with those objects in the way a game engine might. Google's blog post shows how a SIMA agent inserted into a Genie 2 scene can follow simple instructions like "enter the red door" or "enter the blue door," controlling the avatar via simple keyboard and mouse inputs. That could potentially make Genie 2 environments a great test bed for AI agents in various synthetic worlds.

Google claims rather grandiosely that Genie 2 puts it on "the path to solving a structural problem of training embodied agents safely while achieving the breadth and generality required to progress towards [artificial general intelligence]." Whether or not that ends up being true, recent research shows that agent learning gained from foundational models can be effectively applied to real-world robotics.

Using this kind of AI model to create worlds for other AI models to learn in might be the ultimate use case for this kind of technology. But when it comes to the dream of an AI model that can create generic 3D worlds that a human player could explore in real time, we might not be as close as it seems.
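As a closing illustration, here is a minimal, purely hypothetical sketch of that agent-in-a-generated-world loop, in the reset/step shape such experiments often use. A SIMA-style agent would map an instruction and the current frame to keyboard-and-mouse inputs; none of these class or method names come from Google's materials, and the scripted "policy" below is just a placeholder.

class GeneratedWorld:
    # Hypothetical wrapper around a world model like the rollout sketch
    # earlier: reset() seeds the world from a prompt, step() feeds one
    # low-level action and returns the next rendered frame.
    def __init__(self, prompt):
        self.frames = [prompt]

    def reset(self):
        self.frames = self.frames[:1]
        return self.frames[0]

    def step(self, action):
        frame = f"{self.frames[-1]} | {action}"
        self.frames.append(frame)
        return frame

class ScriptedAgent:
    # Stand-in for a SIMA-style agent. A real agent would run a
    # vision-language policy over (instruction, frame); this one just
    # replays a fixed sequence of keyboard/mouse inputs.
    def __init__(self, plan):
        self.plan = iter(plan)

    def act(self, instruction, frame):
        return next(self.plan, "noop")

world = GeneratedWorld("hallway with a red door and a blue door")
agent = ScriptedAgent(["move_forward", "turn_left", "click"])
frame = world.reset()
for _ in range(3):
    action = agent.act("enter the red door", frame)
    frame = world.step(action)
    print(frame)

Whether loops like this ever become a practical training crèche for other AI agents depends on exactly the persistence and speed limits discussed above.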