• Exploring Grey Literature on SSRN

    There has been a growing discussion in the academic community surrounding the concept of grey literature, a broad term that encompasses documents, data, research, and materials created outside of the traditional pathways of academic publication, and often for non-academic audiences. This work contributes to the information ecosystem by providing sources of knowledge that are timely and broad, filling in gaps in research and offering original data and insights that extend beyond the typical channels for academic publishing.
    In practice, what does this look like? Grey literature includes various reports, conference proceedings, datasets, legal transcripts, working papers, dissertations, blog posts, policy documents, and a wide range of other work that expands the knowledge base and enriches modern scholarship.
    The Purpose of Grey Literature
    Traditional academic publishing involves peer review and a lengthy publication process, and the resulting documents may not be widely accessible to those without academic library privileges. Grey literature may be released more quickly and is often directly accessible to all, allowing current research within a field to be shared in real time. This provides the opportunity for dissemination of ongoing research, recent developments in policy and government, and relevant reports that help inform the academic discourse of the present and influence the development of research in the future.
    Grey literature provides other benefits beyond its timeliness. The structure of the work itself provides the opportunity to fill in research and knowledge gaps. This can be through the release of up-to-date data, case studies, and reports that don’t fall within the scope of academic publications, or it can present preliminary findings that complement previously published works. Grey literature captures perspectives that have a wider scope and therefore rounds out the scholarly record.
    The accessibility and relevance of grey literature allows the work to have significance outside the world of academia. It helps inform policies, programs, and future academic research. Grey literature takes research and data and translates it into real-world impact.
    Joshua Tucker, professor and researcher at NYU, shared his grey literature on SSRN. He was pleased to see that his report – which would not be included in traditional academic publications – had a presence on SSRN, generating additional attention and citations it wouldn’t have received otherwise. He shared with SSRN that, “This review of the literature was never intended to be an academic article. It was a report commissioned by the Hewlett Foundation, and the Hewlett Foundation put it on its website. I thought people in the policy community were going to see it on the Hewlett website, but I’d love for people to see it in the academic community. I thought that maybe we’d get a few citations out of it, and [decided] to throw it up on SSRN, on a whim. And now it’s been downloaded over 40,000 times and continues to be cited all the time. In that sense, [SSRN] filled this really nice niche: we had something that we didn’t write to be an academic publication [and] weren’t going to send to journals. It’s a nice home for things that don’t have a natural fit.”
    Grey Literature’s Place on SSRN
    As a repository for early-stage research, SSRN provides a home for research in all stages of development. Work submitted to SSRN is made available quickly, creating an outlet for real-time research.
    SSRN is a platform where research in many mediums can thrive. We define research broadly: presentations, infographics, case studies, white papers, working papers, datasets, conference proceedings, informational guides, reports and more. They exist side by side, all with the objective of sharing knowledge at a global level. Because of this, SSRN is a great place for grey literature of all kinds. Even research that doesn’t take a traditional academic pathway can thrive on SSRN.
    The Future of Research
    The world changes quickly – with technology, faster than ever – and SSRN allows the flow of research to keep up with the changing times. The relevance and impact of research matter, and grey literature is a big contributor to both.
    SSRN is where it starts; submit your research in real-time, bring work of any scale and any format, and contribute to the future of this evolving research and scholarship landscape.
    Want to share your grey literature or other early-stage research on SSRN? Click here to submit your research today.
  • NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

    NVIDIA is working with companies worldwide to build out AI factories — speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference.
    The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training — the 12th since the benchmark’s introduction in 2018 — the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark’s toughest large language model (LLM)-focused test: Llama 3.1 405B pretraining.
    The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark — underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks.
    The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.
    On the new Llama 3.1 405B pretraining benchmark, Blackwell delivered 2.2x greater performance compared with the previous-generation architecture at the same scale.
    On the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round.
    These performance leaps highlight advancements in the Blackwell architecture, including high-density liquid-cooled racks, 13.4TB of coherent memory per rack, fifth-generation NVIDIA NVLink and NVIDIA NVLink Switch interconnect technologies for scale-up and NVIDIA Quantum-2 InfiniBand networking for scale-out. Plus, innovations in the NVIDIA NeMo Framework software stack raise the bar for next-generation multimodal LLM training, critical for bringing agentic AI applications to market.
    These agentic AI-powered applications will one day run in AI factories — the engines of the agentic AI economy. These new applications will produce tokens and valuable intelligence that can be applied to almost every industry and academic domain.
    The NVIDIA data center platform includes GPUs, CPUs, high-speed fabrics and networking, as well as a vast array of software like NVIDIA CUDA-X libraries, the NeMo Framework, NVIDIA TensorRT-LLM and NVIDIA Dynamo. This highly tuned ensemble of hardware and software technologies empowers organizations to train and deploy models more quickly, dramatically accelerating time to value.
    The NVIDIA partner ecosystem participated extensively in this MLPerf round. Beyond the submission with CoreWeave and IBM, other compelling submissions were from ASUS, Cisco, Dell Technologies, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Nebius, Oracle Cloud Infrastructure, Quanta Cloud Technology and Supermicro.
    Learn more about MLPerf benchmarks.
  • Bring Receipts: New NVIDIA AI Blueprint Detects Fraudulent Credit Card Transactions With Precision

    Editor’s note: This blog, originally published on October 28, 2024, has been updated.
    Financial losses from worldwide credit card transaction fraud are projected to reach more than $403 billion over the next decade.
    The new NVIDIA AI Blueprint for financial fraud detection can help combat this burgeoning epidemic — using accelerated data processing and advanced algorithms to improve AI’s ability to detect and prevent credit card transaction fraud.
    Launched this week at the Money20/20 financial services conference, the blueprint provides a reference example for financial institutions to identify subtle patterns and anomalies in transaction data based on user behavior to improve accuracy and reduce false positives compared with traditional methods.
    It shows developers how to build a financial fraud detection workflow by providing reference code, deployment tools and a reference architecture.
    Companies can streamline the migration of their fraud detection workflows from traditional compute to accelerated compute using the NVIDIA AI Enterprise software platform and NVIDIA accelerated computing. The NVIDIA AI Blueprint is available for customers to run on Amazon Web Services, with availability coming soon on Dell Technologies and Hewlett Packard Enterprise. Customers can also use the blueprint through service offerings from NVIDIA partners including Cloudera, EXL, Infosys and SHI International.

    Businesses embracing comprehensive machine learning (ML) tools and strategies can observe up to an estimated 40% improvement in fraud detection accuracy, boosting their ability to identify and stop fraudsters faster and mitigate harm.
    As such, leading financial organizations like American Express and Capital One have been using AI to build proprietary solutions that mitigate fraud and enhance customer protection.
    The new AI Blueprint accelerates model training and inference, and demonstrates how these components can be wrapped into a single, easy-to-use software offering, powered by NVIDIA AI.
    Currently optimized for credit card transaction fraud, the blueprint could be adapted for use cases such as new account fraud, account takeover and money laundering.
    Using Accelerated Computing and Graph Neural Networks for Fraud Detection
    Traditional data science pipelines lack the compute acceleration to handle the massive data volumes required for effective fraud detection. ML models like XGBoost are effective for detecting anomalies in individual transactions but fall short when fraud involves complex networks of linked accounts and devices.
    Helping address these gaps, NVIDIA RAPIDS — part of the NVIDIA CUDA-X collection of microservices, libraries, tools and technologies — enables payment companies to speed up data processing and transform raw data into powerful features at scale. These companies can fuel their AI models and integrate them with graph neural networks (GNNs) to uncover hidden, large-scale fraud patterns by analyzing relationships across different transactions, users and devices.
    The use of gradient-boosted decision trees — a type of ML algorithm — tapping into libraries such as XGBoost, has long been the standard for fraud detection.
    The new AI Blueprint for financial fraud detection enhances the XGBoost ML model with NVIDIA CUDA-X Data Science libraries including GNNs to generate embeddings that can be used as additional features to help reduce false positives.
    The GNN embeddings are fed into XGBoost to create and train a model that can then be orchestrated. In addition, NVIDIA Dynamo-Triton, formerly NVIDIA Triton Inference Server, boosts real-time inferencing while optimizing AI model throughput, latency and utilization.
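    To make that workflow concrete, below is a minimal, hypothetical sketch of the pattern described above: precomputed graph embeddings are appended to a transaction’s tabular features, and the combined matrix trains an XGBoost classifier. This is not the blueprint’s actual code; the array shapes, feature counts and parameters are illustrative assumptions, and the GNN embeddings are stubbed with random data where the blueprint would compute them with GPU-accelerated tooling.

        # Illustrative sketch only (not NVIDIA's blueprint code): GNN node
        # embeddings become extra feature columns for a gradient-boosted
        # tree model trained with XGBoost.
        import numpy as np
        import xgboost as xgb
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import average_precision_score

        rng = np.random.default_rng(0)

        # Assumed, randomly generated stand-ins for real inputs: tabular
        # transaction features, per-transaction GNN embeddings, and a ~1%
        # fraud label rate.
        n_txn = 10_000
        tabular = rng.random((n_txn, 20))
        gnn_embeddings = rng.random((n_txn, 64))   # produced upstream by a GNN
        labels = (rng.random(n_txn) < 0.01).astype(int)

        # The embeddings are simply concatenated as additional features.
        features = np.hstack([tabular, gnn_embeddings])

        X_train, X_test, y_train, y_test = train_test_split(
            features, labels, test_size=0.2, stratify=labels, random_state=0
        )

        model = xgb.XGBClassifier(
            n_estimators=300,
            max_depth=6,
            learning_rate=0.1,
            # Re-weight the rare positive (fraud) class.
            scale_pos_weight=(y_train == 0).sum() / max((y_train == 1).sum(), 1),
            tree_method="hist",   # with XGBoost 2.x, device="cuda" enables GPU training
            eval_metric="aucpr",
        )
        model.fit(X_train, y_train)

        scores = model.predict_proba(X_test)[:, 1]
        print("Average precision:", average_precision_score(y_test, scores))

    In the full blueprint, real-time scoring would additionally run through Dynamo-Triton, as noted above; the sketch only shows where the embeddings slot in as extra features for the tree model.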
    NVIDIA CUDA-X Data Science and Dynamo-Triton are included with NVIDIA AI Enterprise.
    Leading Financial Services Organizations Adopt AI
    At a time when many large North American financial institutions are reporting that online and mobile fraud losses continue to increase, AI is helping to combat this trend.
    American Express, which began using AI to fight fraud in 2010, leverages fraud detection algorithms to monitor all customer transactions globally in real time, generating fraud decisions in just milliseconds. Using a combination of advanced algorithms, one of which tapped into the NVIDIA AI platform, American Express enhanced model accuracy, advancing the company’s ability to better fight fraud.
    European digital bank bunq uses generative AI and large language models to help detect fraud and money laundering. Its AI-powered transaction-monitoring system achieved nearly 100x faster model training speeds with NVIDIA accelerated computing.
    BNY announced in March 2024 that it became the first major bank to deploy an NVIDIA DGX SuperPOD with DGX H100 systems, which will help build solutions that support fraud detection and other use cases.
    And now, systems integrators, software vendors and cloud service providers can integrate the new NVIDIA blueprint for fraud detection to boost their financial services applications and help keep customers’ money, identities and digital accounts safe.
    Explore the NVIDIA AI Blueprint for financial fraud detection and read this NVIDIA Technical Blog on supercharging fraud detection with GNNs.
    Learn more about AI for fraud detection by visiting the AI Summit at Money20/20, running this week in Amsterdam.
    See notice regarding software product information.
  • Dell, Nvidia, and Department of Energy join forces on "Doudna" supercomputer for science and AI

    What just happened? The Department of Energy has announced plans for a new supercomputer designed to significantly accelerate research across a wide range of scientific fields. The initiative highlights the growing convergence between commercial AI development and the computational demands of cutting-edge scientific discovery.
    The advanced system, to be housed at Lawrence Berkeley National Laboratory and scheduled to become operational in 2026, will be named "Doudna" in honor of Nobel laureate Jennifer Doudna, whose groundbreaking work on CRISPR gene editing has revolutionized molecular biology.
    Dell Technologies has been selected to deliver the Doudna supercomputer, marking a significant shift in the landscape of government-funded high-performance computing.
    While companies like Hewlett Packard Enterprise have traditionally dominated this space, Dell's successful bid signals a new chapter. "A big win for Dell," said Addison Snell, CEO of Intersect360 Research, in an interview with The New York Times, noting the company's historically limited presence in this domain.
    Dell executives explained that the Doudna project enabled them to move beyond the longstanding practice of building custom systems for individual laboratories. Instead, they focused on developing a flexible platform capable of serving a broad array of users. "This market had shifted into some form of autopilot. What we did was disengage the autopilot," said Paul Perez, senior vice president and technology fellow at Dell.

    The Perlmutter supercomputer at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory.
    A defining feature of Doudna will be its use of Nvidia's Vera Rubin platform, engineered to combine the strengths of traditional scientific simulations with the power of modern AI. Unlike previous Department of Energy supercomputers, which relied on processors from Intel or AMD, Doudna will incorporate a general-purpose Arm-based CPU from Nvidia, paired with the company's Rubin AI chips designed specifically for artificial intelligence and simulation workloads.

    The architecture aims to meet the needs of the laboratory's 11,000 users, who increasingly depend on both high-precision modeling and rapid AI-driven data analysis.
    Jensen Huang, founder and CEO of Nvidia, described the new system with enthusiasm. "Doudna is a time machine for science – compressing years of discovery into days," he said, adding that it will let "scientists delve deeper and think bigger to seek the fundamental truths of the universe."
    In terms of performance, Doudna is expected to be over 10 times faster than the lab's current flagship system, making it the Department of Energy's most powerful resource for training AI models and conducting advanced simulations. Jonathan Carter, associate lab director for computing sciences at Berkeley Lab, said the system's architecture was shaped by the evolving needs of researchers – many of whom are now using AI to augment simulations in areas like geothermal energy and quantum computing.
    Doudna's design reflects a broader shift in supercomputing. Traditional systems have prioritized 64-bit calculations for maximum numerical accuracy, but modern AI workloads often benefit from lower-precision operations (such as 16-bit or 8-bit) that enable faster processing speeds. Dion Harris, Nvidia's head of data center product marketing, noted that the flexibility to combine different levels of precision opens new frontiers for scientific research.
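    As a rough, generic illustration of that trade-off (not specific to Doudna or the Vera Rubin platform), the snippet below compares the memory footprint of the same array stored at 64-, 32- and 16-bit floating-point precision; smaller types also reduce the memory bandwidth each operation consumes, which is a big part of why AI-heavy workloads can run faster at lower precision.

        # Illustrative only: memory footprint of one million values at
        # different floating-point precisions.
        import numpy as np

        n = 1_000_000
        for dtype in (np.float64, np.float32, np.float16):
            arr = np.ones(n, dtype=dtype)
            bits = np.dtype(dtype).itemsize * 8
            print(f"float{bits}: {arr.nbytes / 1e6:.1f} MB")
        # float64: 8.0 MB
        # float32: 4.0 MB
        # float16: 2.0 MB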
    The supercomputer will also be tightly integrated with the Energy Sciences Network, allowing researchers nationwide to stream data directly into Doudna for real-time analysis. Sudip Dosanjh, director of the National Energy Research Scientific Computing Center, described the new system as "designed to accelerate a broad set of scientific workflows."
  • This $300 ‘Toothbrush’ Is the Worst Thing I’ve Ever Shoved in My Mouth

    Feno, the “smart electric toothbrush,” promised to take a two-minute toothbrushing routine and bring it down to 30 or even 20 seconds by swabbing each of my teeth at once. The Feno Smartbrush does make brushing faster, but in exchange it requires you to shove an entire mouthpiece in your piehole twice a day just to cut down on a total of three minutes of brushing time. If there is one thing to take away from this review, it’s that even if tech works, that doesn’t necessarily mean it’s better than what we already have.
    The “toothbrush” has been at the side of my bathroom sink for more than three weeks. It has technically saved me time. I would even go as far as to say it may do the job of a regular toothbrush in less time. Still, given the choice, I would rather reach for my non-motorized, dentist-recommended toothbrush—if only because I know it works. After consulting with the company and non-affiliated dentists, I’m more bemused that the Feno exists at all.
    This is a device that costs $300 for the “Founder’s Edition” bundle. The company recently said it would increase the price, blaming tariffs for the rising cost; as of the time of this publishing, that new price hasn’t yet materialized. The box comes with three canisters of brand-specific Feno Foam toothpaste, and after you run out, you’ll need to pay for an extra three canisters. Feno also recommends replacing the mouthpiece every three months, at an additional cost. My dentist gave me my last manual toothbrush for free, and a tube of toothpaste costs next to nothing by comparison.
    The verdict: the Feno Smartbrush may brush all your teeth for quicker cleans, but it’s too much of an unknown to recommend.
    Despite the price, the company behind the smart toothbrush has one compelling pitch: if people were honest with themselves, most folks do not do the recommended amount of brushing. I fit into that camp for most of my life, until I went to my dentist and found I needed multiple caps on my molars, requiring me to spend a hefty chunk of change for the privilege of having my teeth ground down to nubs. Since then, I’ve become very sensitive to the state of my pearly whites. I try to do the full two minutes of brushing and floss every day, but the Feno is supposed to help by shortening the brushing time and helpfully counting you down with an on-screen timer.

    My dentist was skeptical about the device’s claims, especially whether it offers proper back-and-forth brushing technique. The American Dental Association awards a Seal of Acceptance to products it has tested and that dentists recommend; neither Feno’s brush nor its special toothpaste is on that list. All I have to go on is Feno’s own claim that it does what it needs to do to clean my teeth and remove plaque. For cleaning, the device makes use of pressure sensors alongside the mouthpiece’s 18,000 bristles, which Feno claims can hit 250 strokes per tooth in 20 seconds. It uses a sweeping motion along the teeth, which dentists recommend when brushing, but there’s no published science to say the Feno is particularly better than other, similar devices. Feno told me the company has scientific research on how effective the device is, but it’s pending scientific review and won’t be available until some unknown date.
    Feno revels in Silicon Valley startups’ worst habits. Every time you turn it on, the smart toothbrush bombards you with a QR code to download an app for all its controls, rather than including those on-device. The Feno toothbrush can incite the same gag reflex you’ll know if you’ve ever played a contact sport requiring a mouth guard: the device is big enough that you have to open wide to fit the whole thing in at once.
    Brushing with the Feno is not an entirely passive experience, either. Feno’s founder, Dr. Kenny Brown, told me his company recommends moving the brush side to side while the mouthpiece actuates. On its highest settings, the Feno rattled my jaw and made my entire head shake like a marionette piloted by a mad puppeteer. At those speeds, I could feel the mouthpiece rubbing my inner cheek raw; at normal speeds, the Feno was uncomfortable but still usable without any pain. Feno also advises that some gums may bleed if you haven’t been using proper brushing technique for a while, but I didn’t find the bristles any more abrasive to my gums than a regular toothbrush’s. Running on its default settings for 30 seconds, the device seems engineered for most mouths.

    The company claims its device works with regular toothpaste, but when I plastered some gel on the bristles and stuck it in my mouth, the result was a sludgy mess at the bottom of the mouthpiece that took far longer to clean than the typical quick rinse. The Feno Foam toothpaste also doesn’t leave your mouth with the minty taste of fluoride and baking soda you normally associate with the feel of a clean mouth. As a point in favor of the Feno, that minty-fresh taste isn’t actually indicative of clean teeth, according to Dr. Edmond Hewlett, a professor at UCLA’s School of Dentistry and a consumer advisor for the American Dental Association. Brown told me the company plans an updated toothpaste that adds a lingering minty taste, as apparently I wasn’t the only one who spoke up about that lack of “clean” feeling.

    With dentist appointments for even more fillings looming, the Feno would need to be not just good but better at cleaning my teeth than a typical electric brush. Even if I felt it might be hitting all my teeth, the device didn’t leave me feeling clean, not least because I had no control over it.

    Even if the Feno full-mouth toothbrush weren’t uncomfortable, weren’t expensive, didn’t require an app, and worked well with regular toothpaste, it would still be hard to claim it cleans better than the regular $7 toothbrush you can buy at any local pharmacy. Using the smart toothbrush, you can’t tell what’s happening to your teeth or whether it’s hitting all the nooks and crannies, which is a concern when everybody’s set of teeth is different. The Feno is supposedly designed so that its bristles hit all different kinds of teeth at the correct 45-degree angle to the gums, but what really matters is whether it adds anything to your brushing routine. “The critical question of any device like this is if it’s better than a toothbrush,” Hewlett told me. “It’s clear that using a toothbrush properly is one of the most effective things a person can do themselves to preserve their teeth.”
  • Interview: Rom Kosla, CIO, Hewlett Packard Enterprise

    When Rom Kosla, CIO at Hewlett Packard Enterprise (HPE), joined the technology giant in July 2023, the move represented a big shift in direction. Previously CIO at retailer Ahold Delhaize and CIO for enterprise solutions at PepsiCo, Kosla was a consumer specialist who wanted to apply his knowledge in a new sector.
    “I liked the idea of working in a different industry,” he says. “I went from consumer products to retail grocery. Moving into the tech industry was a bit nerve-wracking because the concept of who the customers are is different. But since I grew up in IT, I figured I’d have the ability to navigate my way through the company.”
    Kosla had previously worked as a project manager for Nestlé and spent time with the consultancy Deloitte. Now approaching two years with HPE, Kosla leads HPE’s technology strategy and is responsible for how the company harnesses artificial intelligence (AI) and data. He also oversees e-commerce, app development, enterprise resource planning (ERP) and security operations.
    “The role has exceeded my expectations,” he says. “When you’re a CIO at a multinational, like when I was a divisional CIO at PepsiCo, you’re in the back office. Whether it’s strategy, transformation or customer engagement, the systems are the enablers of that back-office effort. At HPE, it’s different because we are customer zero.”
    Kosla says he prefers the term “customer gold” because he wants HPE to develop high-quality products. In addition to setting the internal digital strategy, he has an outward-facing role providing expert advice to customers. That part of his role reminds him of his time at Deloitte.
    “Those are opportunities to flex my prior experience and capabilities, and learn how to take our products, enable them, and share best practices,” he says. “HPE is like any other company. We use cloud systems and software-as-a-service products, including Salesforce and others. But underneath, we have HPE powering a lot of the capabilities.”

    The press release announcing Kosla’s appointment in 2023 said HPE believed his prior experiences in the digital front-end and running complex supply chains made him the perfect person to build on its digital transformation efforts. So, how has that vision panned out?
    “What’s been interesting is helping the business and IT team think about the end-to-end value stream,” he says. “There was a lot of application-specific knowledge. The ability for processes to be optimised at an application layer versus the end-to-end value stream was only happening in certain spots.”
    Kosla discovered the organisation had spent two years moving to a private cloud installation on the company’s hardware and had consolidated 20-plus ERP systems under one SAP instance. With much of the transformation work complete, his focus turned to making the most of these assets.
    “The opportunity was not to shepherd up transformation, it was taking the next step, which was optimising,” says Kosla, explaining how he had boosted supply chain performance in his earlier roles. He’s now applying that knowledge at HPE.
    “What we’ve been doing is slicing areas of opportunity,” he says. “With the lead-to-quote process, for example, we have opportunities to optimise, depending on the type of business, such as the channel and distributors. We’re asking things like, ‘Can we get a quote out as quickly as possible, can we price it correctly, and can we rely less on human engagement?’”
    HPE announced a cost-reduction programme in March to reduce structural operating costs. The programme is expected to be implemented through fiscal year 2026 and deliver gross savings of approximately $350m by fiscal year 2027, including through workforce reductions. The programme of work in IT will help the company move towards these targets.
    Kosla says optimisation in financials might mean closing books faster. In the supply chain, the optimisation might be about predicting the raw materials needed to create products. He takes a term from his time in the consumer-packaged goods sector – right to play, right to win – to explain how his approach helps the business look for value-generating opportunities.
    “So, do we have the right to play, meaning do we have the skills? Where do we have the right to win, meaning do we have the funding, business resources and availability to deliver the results? We spend time focusing on which areas offer the right to play and the right to win.”

    Kosla says data and AI play a key role in these optimisations. HPE uses third-party applications with built-in AI capabilities and has developed an internal chat solution called ChatHPE, a generative AI hub used for internal processes.
    “There are lots of conversations around how we unlock the benefits of AI in the company,” he says. Professionals across the company use Microsoft Copilot in their day-to-day roles to boost productivity. Developers, meanwhile, use GitHub Copilot.
    Finally, there’s ChatHPE, which Kosla says is used according to the functional use case. HPE started developing the platform about 18 months ago. A pipeline of use cases has now been developed, including helping legal teams to review contracts, boosting customer service in operations, re-using campaign elements in marketing and improving analytics in finance.

    “We spend time focusing on which areas offer the right to play and the right to win”
    Rom Kosla, Hewlett Packard Enterprise

    “We have a significant amount of governance internally,” says Kosla, referring to ChatHPE, which is powered by Azure and OpenAI technology. “When I started, there wasn’t an internal HPE AI engine. We had to tell the teams not to use the standard tools because any data that you feed into them is ultimately extracted. So, we had to create our platform.”
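    The article doesn’t detail ChatHPE’s internals beyond it being powered by Azure and OpenAI technology, but a minimal sketch of what a governed internal gateway call of that kind often looks like, using the Azure OpenAI Python SDK, is below. The endpoint, deployment name and prompts are illustrative assumptions, not HPE’s actual configuration.

        from openai import AzureOpenAI

        # Illustrative sketch only: endpoint, credentials and deployment name are placeholders,
        # not details of HPE's ChatHPE platform.
        client = AzureOpenAI(
            azure_endpoint="https://internal-ai-gateway.example.com",  # assumed internal gateway
            api_key="<token issued by the internal platform team>",
            api_version="2024-02-01",
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # assumed deployment name behind the gateway
            messages=[
                {"role": "system", "content": "You are an internal assistant; company data stays inside the tenant."},
                {"role": "user", "content": "Summarise the termination clauses in this supplier contract: ..."},
            ],
        )
        print(response.choices[0].message.content)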
    Embracing AI isn’t Kosla’s only concern. Stabilisation is a big part of what he needs to achieve during the next 12 months. He returns to HPE’s two major transformation initiatives – the shift to private cloud and the consolidation of ERP platforms – suggesting that the dual roll-out and management of these initiatives created a significant number of incidents.
    “When I look back at PepsiCo, we had about 300,000 employees and about 600,000 tickets, which means two tickets per person per year. I said to the executive committee at HPE, ‘We have 60,000 employees, and we have a couple of million tickets’, which is an insane number. The goal was to bring that number down by about 85%,” he says.
    “Now, our system uptime is 99% across our quoting and financial systems. That availability allows our business to do more than focus on internal IT. They can focus on the customer. Stabilisation means the business isn’t constantly thinking about IT systems, because it’s a challenge to execute every day when systems are going down because of issues.”
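    The ticket arithmetic Kosla quotes works out roughly as follows, taking “a couple of million” as two million and treating every figure as approximate:

        # Back-of-envelope check of the ticket volumes cited in the interview.
        pepsico_per_person = 600_000 / 300_000      # ~2 tickets per employee per year
        hpe_per_person = 2_000_000 / 60_000         # ~33 tickets per employee per year
        hpe_after_cut = 2_000_000 * (1 - 0.85)      # an 85% reduction leaves ~300,000 tickets
        print(pepsico_per_person, round(hpe_per_person), int(hpe_after_cut))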

    Kosla says the long-term aim from an IT perspective is to align the technology organisation with business outcomes. In financials, for example, he wants to produce the data analytics the business needs across the supply chain and operational processes.
    “We have embedded teams that work together to look at how we enable data, like our chat capabilities, into some of the activities,” he says. “They’ll consider how we reduce friction, especially the manual steps. They’ll also consider planning, from raw materials to the manufacturing and delivery of products. That work involves partnering with the business.”
    The key to success for the IT team is to help the business unlock value quicker. “I would say that’s the biggest part for us,” says Kosla. “We don’t even like to use the word speed – we say velocity, because velocity equals direction, and that’s crucial for us. I think the business is happy with what we’ve been able to achieve, but it’s still not fast enough.”
    Being able to deliver results at pace will rely on new levels of flexibility. Rather than being wedded to a 12-month plan that maps out a series of deliverables, Kosla wants his team to work more in the moment. Prior experiences from the consumer sector give him a good sense of what excellence looks like in this area.
    “You don’t need to go back to the top, go through an annual planning review, go back down, and then have the teams twiddling their thumbs while they wait for the OK,” he says.
    “The goal is that teams are constantly working on what’s achievable during a sprint window. Many companies take that approach; I’ve done it in my prior working life. I know what can happen, and I think flexibility will drive value creation.”
    Kosla says some of the value will come from HPE’s in-house developed technologies. “One of the things that makes this role fun is that there’s a significant amount of innovation the company is doing,” he says, pointing to important technologies, such as Morpheus VM Essentials virtualisation software, the observability platform OpsRamp, and Aruba Networking Access Points.
    “What I’m proud of is that we now show up to customers with comparability,” he says, talking about the advisory part of his role. “We can say, ‘Look, we use both products, because in some cases, it’s a migration over time.’ So, for example, when a customer asks about our observability approach, we can compare our technology with other providers.”

    Kosla reflects on his career and ponders the future of the CIO role, suggesting responsibilities will vary considerably according to sector. “Digital leaders still maintain IT systems in some industries,” he says.
    “However, the rest of the business is now much more aware of technology. The blurring of lines between business and IT means it’s tougher to differentiate between the two areas. I think we’ll see more convergence.”
    Kosla says a growing desire to contain costs often creates a close relationship between IT and finance leaders. Once again, he expects further developments in that partnership. He also anticipates that cyber will remain at the forefront of digital leaders’ priority lists.
    More generally, he believes all IT professionals are becoming more focused on business priorities. “I think the blurring will continue to create interesting results, especially in technology companies,” he says. “We want to do things differently.”

    Read more interviews with tech company IT leaders

    Interview: Joe Depa, global chief innovation officer, EY – Accounting firm EY is focused on ‘AI-ready data’ to maximise the benefits of agentic AI and enable the use of emerging frontier technologies for its business and clients.
    Interview: Cynthia Stoddard, CIO, Adobe – After nearly 10 years in post, Adobe’s CIO is still driving digital transformation and looking to deliver lasting change through technology.
    Interview: Tomer Cohen, chief product officer, LinkedIn – The professional social network’s product chief is leading the introduction of artificial intelligence for the firm’s in-house development processes and to enhance services for users.
  • TSMC to White House: You Want US-Made Chips? Knock It Off With the Tariffs

    TSMC is signaling to the Trump administration that any plan to tariff foreign-made chips risks derailing the company’s $165 billion investment in Arizona semiconductor factories. The warning comes after the Commerce Department solicited public comment on the US potentially tariffing foreign-made semiconductors to help encourage domestic chip manufacturing. In its letter to the agency, TSMC said such tariffs could threaten demand for electronics and reduce the company’s revenue. “Diminished demand could create uncertainty around the timeline for the construction and operation of our Arizona fabs. It could also undermine TSMC’s financial capacity to timely execute its ambitious Arizona project,” the company said. TSMC, which manufactures chips for Apple, AMD, Nvidia, and even Intel, added: “Lower market demand for our leading US customers’ products may consequently reduce demand for TSMC’s manufacturing capacity and service onshore.”

    In March, TSMC announced an additional $100 billion investment in three new fabs in Arizona, for a total of six. But so far, only one of the fabs has started producing processors, forcing TSMC to rely on its factories in Taiwan for most chip manufacturing. As a result, the letter from TSMC urges the Trump administration to exclude the company from any semiconductor-related tariffs. “To allow investments such as TSMC Arizona to proceed expeditiously, the administration should exempt TSMC Arizona and other companies that have already committed to semiconductor manufacturing projects in the United States from tariffs or other import restrictions,” it said. The letter notes that the company’s Arizona site “will ultimately comprise around 30% of TSMC’s total worldwide capacity for 2nm and more advanced technology nodes,” which should also be enough to meet US demand. In addition, TSMC has already started construction on its third fab in Arizona, “which will initially use 2nm and later A16 process technology, featuring Super Power Rail, TSMC’s best-in-class backside power delivery solution.”

    Numerous other companies and industry groups have also responded to the agency’s request. In its letter, PC maker Dell said the effort to manufacture more chips in the US is “nascent and lacks the requisite infrastructure to supply these products at scale to meet current and increasing demand.” Meanwhile, Hewlett Packard Enterprise told the department: "HPE has no alternative but to import semiconductors for its US manufacturing operations. Imposing tariffs on those imported semiconductors would harm HPE's ability to maintain and expand its domestic manufacturing activities and retard US R&D and innovation ultimately to the detriment of national security and economic growth."

    But Intel, which manufactures chips in the US, took a slightly different view, noting the need to “Protect American Manufactured Semiconductor Wafers and Derivative Products.” “To sustain the US semiconductor industry and support global customers, policies must address structural disparities and incentivize US-based semiconductor manufacturing,” Intel said.
    “As foreign buyers increasingly design out US chips due to tariff-related costs, exempting goods with US-made semiconductors from these financial burdens is crucial.” The same letter calls for the Trump administration to exempt semiconductor wafers made in the US, “as well as wafers manufactured based on US-based process technologies and US-owned IP.” In addition, Intel wants exemptions for its supply chain, which includes chip-making equipment developed overseas. “While Intel is committed to building semiconductors in the US, fully localizing every element of the supply chain is economically unfeasible without significant cost increases and production delays,” the company added.
    #tsmc #white #house #you #want
    TSMC to White House: You Want US-Made Chips? Knock It Off With the Tariffs
    TSMC is signaling to the Trump administration that any plan to tariff foreign-made chips risks derailing the company’s billion investment in Arizona semiconductor factories.The warning comes after the Commerce Department solicited public comment on the US potentially tariffing foreign-made semiconductors to help encourage domestic chip manufacturing. In its letter to the agency, TSMC said such tariffs could threaten demand for electronics and reduce the company’s revenue.  “Diminished demand could create uncertainty around the timeline for the construction and operation of our Arizona fabs. It could also undermine TSMC’s financial capacity to timely execute its ambitious Arizona project,” the company said. TSMC—which manufactures chips for Apple, AMD, Nvidia, and even Intel—added that: “Lower market demand for our leading US customers’ products may consequently reduce demand for TSMC’s manufacturing capacity and service onshore.”In March, TSMC announced an additional billion investment in three new fabs in Arizona, for a total of six. But so far, only one of the fabs has started producing processors, forcing TSMC to rely on its factories in Taiwan for most chip manufacturing. As a result, the letter from TSMC urges the Trump administration to exclude the company from any semiconductor-related tariffs. “To allow investments such as TSMC Arizona to proceed expeditiously, the administration should exempt TSMC Arizona and other companies that have already committed to semiconductor manufacturing projects in the United States from tariffs or other import restrictions,” it said. The letter notes that the company’s Arizona site “will ultimately comprise around 30% of TSMC’s total worldwide capacity for 2nm and more advanced technology nodes,” which should also be enough to meet US demands. In addition, TSMC has already started construction on its third fab in Arizona, “which will initially use 2nm and later A16 process technology, featuring Super Power Rail, TSMC’s best-in-class backside power delivery solution.”Recommended by Our EditorsNumerous other companies and industry groups have also responding to the agency's request. In its letter, PC maker Dell said the effort to manufacture more chips in the US is “nascent and lacks the requisite infrastructure to supply these products at scale to meet current and increasing demand.” Meanwhile, Hewlett Packard Enterprise told the department: "HPE has no alternative but to import semiconductors for its US manufacturing operations. Imposing tariffs on those imported semiconductors would harm HPE's ability to maintain and expand its domestic manufacturing activities and retard US R&D and innovation ultimately to the detriment of national security and economic growth."But Intel, which manufactures chips in the US, took a slightly different view, noting the need to “Protect American Manufactured Semiconductor Wafers and Derivative Products.” “To sustain the US semiconductor industry and support global customers, policies must address structural disparities and incentivize US-based semiconductor manufacturing,” Intel said. 
“As foreign buyers increasingly design out US chips due to tariff-related costs, exempting goods with US-made semiconductors from these financial burdens is crucial.”The same letter calls for the Trump administration to exempt semiconductor wafers either made in the US “as well as wafers manufactured based on US-based process technologies and US-owned IP.” In addition, Intel wants exemptions for its supply chain, which includes chip-making equipment developed overseas. “While Intel is committed to building semiconductors in the US, fully localizing every element of the supply chain is economically unfeasible without significant cost increases and production delays,” the company added. #tsmc #white #house #you #want
    TSMC to White House: You Want US-Made Chips? Knock It Off With the Tariffs
    ME.PCMAG.COM
    TSMC is signaling to the Trump administration that any plan to tariff foreign-made chips risks derailing the company’s $165 billion investment in Arizona semiconductor factories.
    The warning comes after the Commerce Department solicited public comment on the US potentially tariffing foreign-made semiconductors to help encourage domestic chip manufacturing. In its letter to the agency, TSMC said such tariffs could threaten demand for electronics and reduce the company’s revenue. “Diminished demand could create uncertainty around the timeline for the construction and operation of our Arizona fabs. It could also undermine TSMC’s financial capacity to timely execute its ambitious Arizona project,” the company said. TSMC, which manufactures chips for Apple, AMD, Nvidia, and even Intel, added: “Lower market demand for our leading US customers’ products may consequently reduce demand for TSMC’s manufacturing capacity and service onshore.”
    In March, TSMC announced an additional $100 billion investment in three new fabs in Arizona, for a total of six. But so far, only one of the fabs has started producing processors, forcing TSMC to rely on its factories in Taiwan for most chip manufacturing. As a result, the letter from TSMC urges the Trump administration to exclude the company from any semiconductor-related tariffs. “To allow investments such as TSMC Arizona to proceed expeditiously, the administration should exempt TSMC Arizona and other companies that have already committed to semiconductor manufacturing projects in the United States from tariffs or other import restrictions,” it said.
    The letter notes that the company’s Arizona site “will ultimately comprise around 30% of TSMC’s total worldwide capacity for 2nm and more advanced technology nodes,” which should also be enough to meet US demand. In addition, TSMC has already started construction on its third fab in Arizona, “which will initially use 2nm and later A16 process technology, featuring Super Power Rail, TSMC’s best-in-class backside power delivery solution.”
    Numerous other companies and industry groups have also responded to the agency’s request. In its letter, PC maker Dell said the effort to manufacture more chips in the US is “nascent and lacks the requisite infrastructure to supply these products at scale to meet current and increasing demand.” Meanwhile, Hewlett Packard Enterprise told the department: “HPE has no alternative but to import semiconductors for its US manufacturing operations. Imposing tariffs on those imported semiconductors would harm HPE’s ability to maintain and expand its domestic manufacturing activities and retard US R&D and innovation ultimately to the detriment of national security and economic growth.”
    But Intel, which manufactures chips in the US, took a slightly different view, noting the need to “Protect American Manufactured Semiconductor Wafers and Derivative Products.” “To sustain the US semiconductor industry and support global customers, policies must address structural disparities and incentivize US-based semiconductor manufacturing,” Intel said. “As foreign buyers increasingly design out US chips due to tariff-related costs, exempting goods with US-made semiconductors from these financial burdens is crucial.”
    The same letter calls for the Trump administration to exempt semiconductor wafers made in the US, “as well as wafers manufactured based on US-based process technologies and US-owned IP.” In addition, Intel wants exemptions for its supply chain, which includes chip-making equipment developed overseas. “While Intel is committed to building semiconductors in the US, fully localizing every element of the supply chain is economically unfeasible without significant cost increases and production delays,” the company added.