• CoreWeave shares soar 19% after $2 billion debt offering

    CoreWeave shares popped more than 19% after the renter of artificial intelligence data centers announced a $2 billion debt offering.
    www.cnbc.com
  • What it’s like to wear Google’s Gemini-powered AI glasses

    Google wants to give people access to its Gemini AI assistant with the blink of an eye: The company has struck a partnership with eyeglasses makers Warby Parker and Gentle Monster to make AI smart glasses, it announced at its Google I/O developer conference in Mountain View Tuesday. These glasses will be powered by Google’s new Android XR platform, and are expected to be released in 2026 at the earliest.

    To show what Gemini-powered smart glasses can do, Google has also built a limited number of prototype devices in partnership with Samsung. These glasses use a small display in the right lens to show live translations, directions and similar lightweight assistance. They also feature an integrated camera that gives Gemini a real-time view of your surroundings and can also be used to capture photos and videos.

    “Unlike Clark Kent, you can get superpowers when you put your glasses on,” joked Android XR GM and VP Shahram Izadi during Tuesday’s keynote presentation.

    Going hands- (and eyes-) on

    Google demonstrated its prototype device to reporters Tuesday afternoon. Compared to a regular pair of glasses, Google’s AI device still features notably thicker temples. These house microphones, a touch interface for input, and a capture button to take photos. Despite all of that, the glasses do feel light and comfortable, similar to Meta’s Ray-Ban smart glasses.

    The Google glasses’ big difference compared to Meta’s reveals itself almost immediately after putting them on: At the center of the right lens is a small, rectangular see-through display. It doesn’t obstruct your view of the world when not actively in use. However, during the demo, I at times noticed a purple reflection from the waveguide that’s at the core of the display in the upper right corner of my field-of-view.

    Google’s AI assistant can be summoned with a simple touch gesture. Once active, Gemini automatically accesses the outward-facing camera of the glasses, which makes it possible to ask about anything you see. During my short demo, the assistant correctly described the content of a painting, identified its painter, and offered some information about books hand-selected by Google for the demo.

    In addition to AI assistance, the glasses can also be used for live translation and navigation. Google only showed the latter to members of the media. When in Google Maps mode, the glasses automatically display turn-by-turn walking directions when you look up. Look down, and the display shows a small, circular street map floating in front of you.

    The display itself looked bright and legible, even when showing multiple lines of text at a time. However, Google conducted these demos indoors; it’s unclear how bright sunlight will impact legibility. 

    Also unknown at this point is how long the batteries of such a device will last. Android XR glasses are designed for all-day wear, according to Izadi, but that doesn’t really tell us how many hours they can be used at a time.

    Lots of open questions

    Third-party apps were also notably absent from the demo. Izadi said Tuesday that glasses running Android XR will work with your phone, “giving you access to your apps while keeping your hands free.” How exactly that will work is unclear, as the display integrated into the prototype was too small to display the full UI of most apps. Most likely, Android XR will render apps in a simplified, device-optimized fashion, similar to the way apps show up on smart watches such as the Apple Watch and Google’s Android Wear devices.

    The emergence of these kinds of devices also raises more fundamental questions about privacy. The prototype device shown at Google’s event this week has an LED that’s supposed to signal to bystanders when it takes photos or records video, and an internal LED that signals to the wearer when footage is being captured.

    However, the LED doesn’t turn on while Google’s Gemini assistant observes the world through the camera. According to a Google spokesperson, that’s because any video ingested this way is not being stored, but only temporarily used to make sense of the world. Bystanders, however, may not appreciate that distinction. They may assume that a device that can “see” the world at all times also continuously captures video.

    Lastly, it’s still unclear what Google’s vision for other form factors looks like. The company also announced plans to release a pair of tethered AR glasses in partnership with Chinese AR startup Xreal Tuesday. With displays in both eyes, that device will be able to render much more immersive experiences, and presumably emphasize entertainment and work applications over more basic assistance.

    In addition, Google’s roadmap for Android XR-powered devices includes glasses without any display at all. These are likely going to be similar to Meta’s Ray-Ban smart glasses, albeit with access to Google’s Gemini assistant instead of Meta’s AI. Omitting a display brings down the manufacturing costs of smart glasses, while also helping with an important goal: To make devices that look and feel familiar to anyone who has ever worn a pair of glasses.

    “We know that these need to be stylish glasses that you’ll want to wear all day,” Izadi said.
    www.fastcompany.com
  • Good or Bad? This System Records Entire Sporting Matches, But Highlights Just Your Child

    Over a decade ago, the prolific writer and artist Chris Ware highlighted the negative effects smartphones were having on our society. His spot-on cover for the January 6th, 2014 issue of The New Yorker was titled "All Together Now."

    In 2014, the video quality of smartphones was pretty good. Today it's practically broadcast level, and it shows in how we behave at concerts and sporting events. The next time you're attending your child's game, look around: How many of the parents are taking in the game, versus being wholly focused on recording their child's individual performance?

    A Texas-based company called Trace believes it has the solution, though it's a good deal bulkier than a smartphone. The Trace camera is something you set up at midfield on the sidelines, assuming you've got access, on its included four-foot tripod. The tripod can extend to a height of sixteen feet, and you use the included trio of sandbags to stabilize the thing. The camera captures a panoramic view of the entire pitch, and the company's PlayerFocus AI technology then spits out a video that tracks only your child.

    The cost is not cheap. First off, you can't buy the camera; you lease it, along with the tripod and sandbags, for an indefinite period. To use it, you must have a subscription with the company, which starts at $25/month or $180/year. At that Basic level, you cannot download any of the videos and can only watch the last five matches by streaming them through the company's app. If you step up to the $300/year Pro subscription, you can access all recorded matches and download them.

    If you can muster seven or more families willing to share the camera for a Team subscription, the lease is free, and each family pays its own subscription fee, choosing either Basic or Pro independently of the other families.

    Here's the question: Would this object increase or decrease the sad friction that already exists at children's sporting events, with apoplectic parents getting into it with coaches and other parents? I also wonder about the logistics of seven families coordinating a Team subscription and assigning responsibility for toting and setting up the camera.

    That said, I could see this tool being useful for coaching staff.
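    As a quick back-of-the-envelope comparison of the subscription tiers quoted above, here is a small Swift sketch using only the prices in the article; the 4 Basic / 3 Pro split in the Team scenario is an invented example, not something Trace prescribes.

    ```swift
    import Foundation

    // Cost comparison using the subscription prices quoted above:
    // Basic at $25/month or $180/year, Pro at $300/year.
    let basicMonthly = 25
    let basicAnnual = 180
    let proAnnual = 300

    print("Basic, paid monthly for a year: $\(basicMonthly * 12)") // $300
    print("Basic, paid annually:           $\(basicAnnual)")       // $180
    print("Pro, paid annually:             $\(proAnnual)")         // $300

    // Team scenario: seven or more families share one camera (the lease is free),
    // and each family still pays for its own tier. The split below is illustrative.
    let teamAnnualTotal = 4 * basicAnnual + 3 * proAnnual
    print("Example team of 7 (4 Basic, 3 Pro), total per year: $\(teamAnnualTotal)") // $1620
    ```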
    www.core77.com
  • Ultra-Slim Cyberpunk Keyboard is 37% slimmer than Apple’s own Magic Keyboard

    Even though Apple DID make a 5.3mm iPad Pro, let’s just remember that they didn’t have as much success slimming down their keyboards. Remember the infamous butterfly keys on the MacBooks of 2015? Well, after that travesty, Apple just went back to what worked – relying on good-old scissor switches that resulted in a marginally thicker, yet more practical and functional device. At just 8 millimeters thick, the ‘mikefive’ doesn’t look like it should be real—let alone functional. But it is. It’s 37% slimmer than Apple’s Magic Keyboard and still manages to pack in 1.8mm of key travel, tactile mechanical switches, wireless connectivity, and a cyberpunk-grade metal chassis. It’s the kind of gear that looks like it came from a movie prop shop specializing in dystopian sci-fi—only it types better than most of what’s on your desk.
    The mastermind behind it is Reddit user dynam1keNL, an industrial product designer who clearly decided the mechanical keyboard rabbit hole didn’t go deep enough. Built from scratch around Kailh’s obscure PG1316 laptop switches, the mikefive is what happens when obsessive design meets precision engineering. The custom transparent caps, the CNC-machined aluminum chassis, the completely flush PCB layout—it’s all been meticulously dialed in to create a keyboard that doesn’t just challenge what a mechanical board can be. It redefines it.
    Designer: dynam1keNL

    To start, the mikefive is built around the Kailh PG1316 switches—a lesser-known, laptop-style mechanical switch that isn’t just slim, it’s shockingly tactile. These things have a travel of 1.8mm, and despite their wafer-thin profile, they pack a surprisingly aggressive tactile bump. It’s a bold choice that bucks the trend of soft, mushy low-profile inputs. You feel every keypress, and not in a nagging way—more like a firm handshake with every letter.

    The design language leans into a sleek cyberpunk aesthetic: a CNC-machined aluminum chassis that feels like it belongs on the deck of a spaceship, paired with transparent keycaps that hint at the internals while catching ambient light like crystal circuitry. The keycaps are proprietary, square-shaped, and clear, subtly marked with mold letters from the inside.

    What makes this keyboard doubly fascinating is that it isn’t some big brand prototype or crowdfunding darling, it’s a homebrew labor of love from a designer-engineer with a background in industrial product design. The entire board, including its impossibly compact controller and 301230 battery, is laid out like a masterclass in minimalism. The switch mounts directly to the PCB with no pins poking through, letting the board itself double as the bottom plate.

    Both halves of the unibody design are angled 15 degrees for comfort, creating a total ergonomic tilt of 30 degrees. The bottom edge has been subtly shaved down near the thumb cluster to avoid interference. And while there’s a slight warp on one end from hotplate soldering (just old-fashioned human error), it’s barely a blemish on an otherwise refined build.

    Despite its experimental nature, the keyboard’s wireless connection (courtesy of a nice!nano v2) works flawlessly, even with the metal chassis surrounding the internals. Clever placement of the Bluetooth antenna and strategic removal of ground planes near it help the signal escape.
    Typing on the mikefive is a tactile revelation. If you’re coming from linear switches, the force required by the PG1316s might be a shock, but there’s a tactile clarity here that’s hard to ignore. And when you realize your wrists aren’t straining after hours of use, the ultra-low height starts to feel like a long-overdue standard.
    www.yankodesign.com
  • The Enhanced Games Has a Date, a Host City, and a Drug-Fueled World Record

    The Enhanced Games, where athletes are allowed to take performance-enhancing drugs, will host its first event in May. One “enhanced” former Olympic swimmer has already broken the 50-meter freestyle record.
    www.wired.com
  • A.I.-Generated Reading List in Chicago Sun-Times Recommends Nonexistent Books

    A summer reading insert recommended made-up titles by real authors such as Isabel Allende and Delia Owens. The Sun-Times and The Philadelphia Inquirer have apologized.
    www.nytimes.com
  • After Google IO’s big AI reveals, my iPhone has never felt dumber

    Macworld

    I can’t believe I’m about to state this, but I’m considering switching from iOS to Android. Not right now, but what I once considered an absurd notion is rapidly becoming a realistic possibility. While Apple may have an insurmountable lead in hardware, iPhones and Android phones are no longer on par with each other when it comes to AI and assistants, and the gap is only growing wider. At its annual I/O conference on Tuesday, Google didn’t just preview some niche AI gimmicks that look good in a demo; it initiated a computing revolution that Apple simply won’t be able to replicate anytime soon, if ever.

    Actions speak louder than words

    The first thing I noticed during the main I/O keynote was how confident the speakers were. Unlike Apple’s canned Apple Intelligence demo at last year’s WWDC, Google opted for live demos and presentations that reflect its strong belief that everything just works. Many of the announced features were made available on the same day, while some others will follow as soon as this summer. Google didn’t (primarily, at least) display nonexistent concepts and mockups or pre-record the event. It likely didn’t make promises it can’t keep, either.

    If you have high AI hopes for WWDC25, I’d like to remind you that the latest rumors suggest Apple will ignore the elephant in the room, possibly focusing on the revolutionary new UI and other non-AI goods instead. I understand Apple’s tough position—given how last year’s AI vision crumbled before its eyes—but I’d like to think a corporation of that size could’ve acquired its way into building a functional product over the past 12 months. For the first time in as long as I can remember, Google is selling confidence and accountability while Apple is hiding behind glitzy smoke and mirrors.

    Google’s demos at I/O showed the true power of AI. [Photo: Foundry]

    Apple’s tight grip will only suffocate innovation

    A few months ago, Apple added ChatGPT to Siri’s toolbox, letting users rely on OpenAI’s models for complex queries. While a welcome addition, it’s unintuitive to use. In many cases, you need to explicitly ask Apple’s virtual assistant to use ChatGPT, and any accidental taps on the screen will dismiss the entire conversation. Without ChatGPT, Siri is just a bare-bones voice command receiver that can set timers and, at best, fetch basic information from the web.

    Conversely, Google has built an in-house AI system that integrates fully into newer versions of Android. Gemini is evolving from a basic chatbot into an integral part of Google’s ecosystem. It can research and generate proper reports, video chat with you, and pull personal information from your Gmail, Drive, and other Google apps.

    Gemini is already light-years ahead of Siri—and it’s only getting better. [Photo: Foundry]

    Google also previewed Project Astra, which will let Gemini fully control your Android phone, thanks to its agentic capabilities. It’s similar to the revamped Siri with on-screen context awareness (that Apple is reportedly rebuilding from scratch), but much more powerful. While, yes, it’s still just a prototype, Google has seemingly delivered on last year’s promises. Despite Google infamously killing and rebranding projects all the time, I actually believe its AI plans will materialize because it has been constantly shipping finished products to users.

    Unlike Apple, Google is also bringing some of its AI features to other platforms. For example, the Gemini app for iPhone now supports the live video chat feature for free. There are rumors that Apple will open up some of its on-device AI models to third-party app developers, but those will likely be limited to Writing Tools and Image Playground. So even if Google is willing to develop more advanced functionalities for iOS, Apple’s system restrictions would throttle them. Third-party developers can’t control the OS, so Google will never be able to build the same comprehensive tools for iPhones.

    Beyond the basics

    Google’s AI plan doesn’t strictly revolve around its Gemini chatbot delivering information. It’s creating a new computing experience powered by artificial intelligence. Google’s AI is coming to Search and Chrome to assist with web browsing in real time. 

    For example, Gemini will help users shop for unique products based on their personal preferences and even virtually try clothes on. Similarly, other Google AI tools can code interfaces based on text prompts, generate video clips from scratch, create music, translate live Meet conferences, and so on. Now, I see how dystopian this all can be, but with fair use, it will be an invaluable resource to students and professionals. 

    Meanwhile, what can Apple Intelligence do? Generate cartoons and proofread articles? While I appreciate Apple’s private, primarily on-device approach, most users care about the results, not the underlying infrastructure.

    Google’s Try It On mode will use AI to show how something will look before you buy it. [Photo: Foundry]

    The wrong path

    During I/O, Google shared its long-term vision for AI, which adds robotics and mixed-reality headsets to the equation. Down the road, the company plans to power machines using the knowledge its AI is gaining each day. It also demoed its upcoming smart glasses, which can mirror Android phone alerts, send texts, translate conversations in real time, scan surrounding objects, and much, much more. 

    While Apple prioritized the Vision Pro headset no one asked for, Google has been focusing its efforts on creating the sleek, practical device users actually need—a more powerful Ray-Ban Meta rival. Before long, Android users will be rocking stylish eyewear and barely using their smartphones in public. Meanwhile, iPhone users will likely be locked out of this futuristic experience because third-party accessories can’t read iOS notifications and interact with the system in the same way.

    Apple is running out of time

    iOS and Android launched as two contrasting platforms. At first, Apple boasted its stability, security, and private approach, while Google’s vision revolved around customization, ease of modding, and openness. Throughout the years, Apple and Google have been learning from each other’s strengths and applying the needed changes to appease their respective user bases. 

    Apple Intelligence had promise, but Apple has failed to deliver its most ambitious features. [Photo: Foundry]

    Recently, it seemed like the two operating systems were finally intersecting: iOS had become more personalizable, while Android deployed stricter guardrails and privacy measures. However, the perceived overlap only lasted for a moment—until the AI boom changed everything.

    The smartphone as we know it today seems to be fading away. AI companies are actively building integrations with other services, and it’s changing how we interact with technology. Mobile apps could become less relevant in the near future, as a universal chatbot would perform the needed tasks based on users’ text and voice prompts. 

    Google is slowly setting this new standard with Android, and if Apple can’t keep up with the times, the iPhone’s relevancy will face the same fate as so many Nokia and BlackBerry phones. And if Apple doesn’t act fast, Siri will be a distant memory.
    www.macworld.com
    Macworld I can’t believe I’m about to state this, but I’m considering switching from iOS to Android. Not right now, but what I once considered an absurd notion is rapidly becoming a realistic possibility. While Apple may have an insurmountable lead in hardware, iPhones and Android phones are no longer on par with each other when it comes to AI and assistants, and the gap is only growing wider. At its annual I/O conference on Tuesday, Google didn’t just preview some niche AI gimmicks that look good in a demo; it initiated a computing revolution that Apple simply won’t be able to replicate anytime soon, if ever. Actions speak louder than words The first thing I noticed during the main I/O keynote was how confident the speakers were. Unlike Apple’s canned Apple Intelligence demo at last year’s WWDC, Google opted for live demos and presentations that only reflect its strong belief that everything just works. Many of the announced features were made available on the same day, while some others will follow as soon as this summer. Google didn’t (primarily, at least) display nonexistent concepts and mockups or pre-record the event. It likely didn’t make promises it can’t keep, either. If you have high AI hopes for WWDC25, I’d like to remind you that the latest rumors suggest Apple will ignore the elephant in the room, possibly focusing on the revolutionary new UI and other non-AI goods instead. I understand Apple’s tough position—given how last year’s AI vision crumbled before its eyes—but I’d like to think a corporation of that size could’ve acquired its way into building a functional product over the past 12 months. For the first time in as long as I can remember, Google is selling confidence and accountability while Apple is hiding behind glitzy smoke and mirrors. Google’s demos at IO showed the true power of AI.Foundry Apple’s tight grip will only suffocate innovation A few months ago, Apple added ChatGPT to Siri’s toolbox, letting users rely on OpenAI’s models for complex queries. While a welcome addition, it’s unintuitive to use. In many cases, you need to explicitly ask Apple’s virtual assistant to use ChatGPT, and any accidental taps on the screen will dismiss the entire conversation. Without ChatGPT, Siri is just a bare-bones voice command receiver that can set timers and, at best, fetch basic information from the web. Conversely, Google has built an in-house AI system that integrates fully into newer versions of Android. Gemini is evolving from a basic chatbot into an integral part of Google’s ecosystem. It can research and generate proper reports, video chat with you, and pull personal information from your Gmail, Drive, and other Google apps. Gemini is already light-years ahead of Siri—and it’s only getting better.Foundry Google also previewed Project Astra, which will let Gemini fully control your Android phone, thanks to its agentic capabilities. It’s similar to the revamped Siri with on-screen context awareness (that Apple is reportedly rebuilding from scratch), but much more powerful. While, yes, it’s still just a prototype, Google has seemingly delivered on last year’s promises. Despite it infamously killing and rebranding projects all the time, I actually believe its AI plans will materialize because it has been constantly shipping finished products to users. Unlike Apple, Google is also bringing some of its AI features to other platforms. For example, the Gemini app for iPhone now supports the live video chat feature for free. 
There are rumors that Apple will open up some of its on-device AI models to third-party app developers, but those will likely be limited to Writing Tools and Image Playground. So even if Google is willing to develop more advanced functionalities for iOS, Apple’s system restrictions would throttle them. Third-party developers can’t control the OS, so Google will never be able to build the same comprehensive tools for iPhones. Beyond the basics Google’s AI plan doesn’t strictly revolve around its Gemini chatbot delivering information. It’s creating a new computing experience powered by artificial intelligence. Google’s AI is coming to Search and Chrome to assist with web browsing in real time.  For example, Gemini will help users shop for unique products based on their personal preferences and even virtually try clothes on. Similarly, other Google AI tools can code interfaces based on text prompts, generate video clips from scratch, create music, translate live Meet conferences, and so on. Now, I see how dystopian this all can be, but with fair use, it will be an invaluable resource to students and professionals.  Meanwhile, what can Apple Intelligence do? Generate cartoons and proofread articles? While I appreciate Apple’s private, primarily on-device approach, most users care about the results, not the underlying infrastructure. Google’s Try It On mode will use AI to show how something will look before you buy it.Foundry The wrong path During I/O, Google shared its long-term vision for AI, which adds robotics and mixed-reality headsets to the equation. Down the road, the company plans to power machines using the knowledge its AI is gaining each day. It also demoed its upcoming smart glasses, which can mirror Android phone alerts, send texts, translate conversations in real time, scan surrounding objects, and much, much more.  While Apple prioritized the Vision Pro headset no one asked for, Google has been focusing its efforts on creating the sleek, practical device users actually need—a more powerful Ray-Ban Meta rival. Before long, Android users will be rocking stylish eyewear and barely using their smartphones in public. Meanwhile, iPhone users will likely be locked out of this futuristic experience because third-party accessories can’t read iOS notifications and interact with the system in the same way. Apple is running out of time iOS and Android launched as two contrasting platforms. At first, Apple boasted its stability, security, and private approach, while Google’s vision revolved around customization, ease of modding, and openness. Throughout the years, Apple and Google have been learning from each other’s strengths and applying the needed changes to appease their respective user bases.  Apple Intelligence had priomise but Apple has failed to deliver its most ambitious features.Foundry Recently, it seemed like the two operating systems were finally intersecting: iOS had become more personalizable, while Android deployed stricter guardrails and privacy measures. However, the perceived overlap only lasted for a moment—until the AI boom changed everything. The smartphone as we know it today seems to be fading away. AI companies are actively building integrations with other services, and it’s changing how we interact with technology. Mobile apps could become less relevant in the near future, as a universal chatbot would perform the needed tasks based on users’ text and voice prompts.  
    Google is slowly setting this new standard with Android, and if Apple can’t keep up with the times, the iPhone will face the same fate as so many Nokia and BlackBerry phones. And if Apple doesn’t act fast, Siri will be a distant memory.
  • Report: Apple Intelligence to open up to developers at WWDC

    How much would your company pay for a focused AI system built by your own experts, one that brings in the best of all available genAI systems within an integration that works via iCloud Private Compute and Apple Intelligence?

    I ask this because, if you stretch your neck and use your imagination just a little bit, you can just about see signs that suggest Apple has a chance to build a system like this. It just needs to loosen up its approach to third-party AI development, make its own APIs more widely available to external developers, and create a highly secure cloud-based system with which to handle complex requests.

    Apple has at least one of those components already — and may soon have two more.

    Apple to introduce Apple Intelligence SDK at WWDC?

    That’s one way to look at the most recent news to emanate from behind the doors of Cupertino via the man who seems to have become Apple’s Chief Rumors Officer, Bloomberg’s Mark Gurman. He says Apple intends to open up access to its own Apple Intelligence APIs at WWDC next month, making it possible for developers to integrate the AI systems it has built into their own applications.

    The company is expected to announce a new software development kit (SDK) in iOS 19 that will make it easier for app developers to add Apple Intelligence features such as Writing Tools or Genmoji to their software. (Gurman’s story says that Apple will at first give developers access to smaller AI tools that can run on the device.)

    Developers can already integrate some Apple Intelligence features into their apps, but this SDK would permit them to create new AI features using Apple’s own AI frameworks. At present, they must use third-party models to accomplish this. 
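
    Nothing about this rumored SDK has been published, so there is no real API to show. Purely as a thought experiment, here is a minimal Swift sketch of what a third-party app might do if Apple exposed an on-device, Writing Tools-style model behind a simple async interface. Every type and method name below (OnDeviceTextModel, rewrite, DraftPolisher) is hypothetical and invented for illustration; none of it belongs to any shipping Apple framework.

```swift
import Foundation

// HYPOTHETICAL sketch only. Apple has not published this SDK; every type and
// method name below is invented for illustration, and a local stub stands in
// for whatever model Apple might actually expose.

/// The kind of on-device text model a rumored Apple Intelligence SDK might
/// hand to third-party apps (think Writing Tools-style rewriting).
protocol OnDeviceTextModel {
    /// Rewrites `draft` in the requested tone, entirely on device.
    func rewrite(_ draft: String, tone: String) async throws -> String
}

/// A trivial local stub so this sketch runs without any real Apple framework.
struct StubTextModel: OnDeviceTextModel {
    func rewrite(_ draft: String, tone: String) async throws -> String {
        // A real model would run inference here; the stub just tags the text.
        "[\(tone)] " + draft.trimmingCharacters(in: .whitespacesAndNewlines)
    }
}

/// How an app feature might wrap the model: the draft never leaves the device,
/// which is the privacy argument the article is making.
struct DraftPolisher {
    let model: OnDeviceTextModel

    func polish(_ draft: String) async -> String {
        (try? await model.rewrite(draft, tone: "concise")) ?? draft
    }
}

@main
struct Demo {
    static func main() async {
        let polisher = DraftPolisher(model: StubTextModel())
        let result = await polisher.polish("  here is a rough draft of my email ")
        print(result) // "[concise] here is a rough draft of my email"
    }
}
```

    The point of hiding the model behind a small protocol like this is that the app code stays the same whether the text is rewritten by a stub, by whatever Apple eventually ships, or by a third-party model, which is roughly the flexibility developers would want such an SDK to deliver.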

    What about Private Cloud Compute?

    The thing is, once you recognize that some iOS applications are themselves front doors to other vendors’ genAI systems, the move may give Apple a double whammy — a chance to offer its highly secure AI solutions via the apps people choose to use, along with a route through which to provide additional solutions for use within those apps, potentially extending to access of some kind to its Private Cloud Compute systems. 

    The latter is a big deal. It means AI developers may be able to find some way to offer up their own solutions to Apple users, making them available via the privacy shield Private Cloud Compute provides. 

    What that means is the convenience of AI along with the privacy and security Apple provides. 

    Take that a couple of steps further down this particular episode of Pure Apple Speculation, and you can see that enterprise users might be pretty excited about that. It matters to business users in regulated industries who want to be able to use powerful AI services but do not want to leave their data at potential risk. 

    Steps on the journey

    I’m not saying Apple will be able to introduce anything quite like this at WWDC. 

    In recent months, stories emerging from inside Apple’s AI teams suggest things are far too chaotic and stressful for such a plan to be put in place. However, I do think that with the addition of an Apple Intelligence SDK and the steady deployment of Apple’s Private Cloud Compute servers, the opportunity to work in closer partnership with third-party AI developers exists. (Don’t ignore those servers — Apple is, after all, working with huge manufacturers to produce them in quantity, which implies it expects to see them being widely used.)

    It may not be the easiest opportunity for Apple to embrace culturally, and it would to some extent signify how far behind Apple has fallen in some respects, but it would be a smart way to maintain hardware and software relevance and provide unique services to its customers. Working with others, Apple may be able to deliver a best-in-class AI you can safely use, privately and securely, without the data leaks. That’s an answer to a question that has slowed AI adoption.

    While you have to be very wary when any Big Tech firm promises privacy (as Siri once showed us), and you don’t want to find out later that all of this was no more than a velvet glove gently ushering us into a ghastly automated dystopia, there is a real need for private AI. Educators, health services, and businesspeople particularly need it, and by working in a positive and constructive way with third-party AI developers, Apple may now have a strategically functional way to deliver it.

    We shall see.

    You can follow me on social media! Join me on BlueSky,  LinkedIn, and Mastodon.
    #report #apple #intelligence #open #developers
    www.computerworld.com
  • Roundtables: A New Look at AI’s Energy Use

    Big Tech’s appetite for energy is growing rapidly as adoption of AI accelerates. But just how much energy does even a single AI query use? And what does it mean for the climate? Hear from MIT Technology Review editor in chief Mat Honan, senior climate reporter Casey Crownhart, and AI reporter James O’Donnell as they explore AI’s energy demands now and in the future.

    Speakers: Mat Honan, editor in chief; Casey Crownhart, senior climate reporter; and James O’Donnell, AI reporter.

    Related Coverage:

    Power Hungry: AI and our energy future

    We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

    Everything you need to know about estimating AI’s energy and emissions burden

    These four charts sum up the state of AI and energy
    #roundtables #new #look #ais #energy
    www.technologyreview.com
  • Save up to $500 on Apple's M4 Mac mini during Memorial Day Sale

    Memorial Day price drops are knocking up to $500 off Apple's M4 and M4 Pro Mac mini range, delivering prices from $529.

    [Image: Save up to $500 on Apple's latest Mac mini]

    Both Amazon and B&H are discounting the current Mac mini product line by up to $500, reflecting the lowest prices on numerous upgraded configurations. Prices start at just $529 for the M4 range, with the steepest discounts available on the M4 Pro models.

    Latest M4 Mac mini discounts

    Continue Reading on AppleInsider | Discuss on our Forums
    #save #apples #mac #mini #during
    appleinsider.com