• VENTUREBEAT.COM
    2027 AGI forecast maps a 24-month sprint to human-level AI
    The newly published AI 2027 scenario offers a detailed two-to-three-year forecast of the future, including specific technical milestones.
  • VENTUREBEAT.COM
    Identity as the new perimeter: NOV’s approach to stopping the 79% of attacks that are malware-free
    NOV’s CIO led a cyber strategy fusing Zero Trust, AI, and airtight identity controls to cut threats 35x and eliminate reimaging.
  • WWW.THEVERGE.COM
    Apple drops ‘available now’ from Apple Intelligence page
    Apple has stopped listing its Apple Intelligence features as “available now” following an inquiry from the National Advertising Division (NAD). Based on an archived webpage, it looks like Apple removed the claim from the top of its Apple Intelligence page in late March. The NAD, which is part of the nonprofit BBB National Programs, reviews national advertising campaigns for truthfulness. It recommended that Apple “discontinue or modify” its “available now” claim, saying it “reasonably conveyed the message” that AI-powered features like Priority Notifications, Genmoji, Image Playground, and a ChatGPT integration were available with the launch of the iPhone 16. The NAD also notes that the footnote attached to the claim was “neither sufficiently clear and conspicuous nor close to the triggering claims.”

[Apple previously said its AI features were “Available now.” Screenshot: The Verge via Wayback Machine]

Apple only rolled out some Apple Intelligence features when the iPhone 16 launched last year, such as writing tools and an AI feature to remove unwanted objects from photos. It added more AI features in later software updates. “While these features are now available, NAD recommended Apple avoid conveying the message that features are available when they are not,” the NAD said in the press release.

Additionally, the NAD found that Apple similarly included its AI-supercharged Siri beneath the “available now” heading even though it still hasn’t arrived. In response, Apple said it has updated its promotional materials and disclosures to “adequately communicate their status.” The company also discontinued its “More Personal Siri” video, which showed actor Bella Ramsey using the voice assistant to pull up the name of a person they met months ago.

“While we disagree with the NAD’s findings related to features that are available to users now, we appreciate the opportunity to work with them and will follow their recommendations,” Apple said in the press release.
  • WWW.THEVERGE.COM
    How AI is reshaping wildlife conservation — for better or worse
    Over the wetlands of Senegal, researcher Alexandre Delplanque pilots a drone to count waterbirds: pelicans, flamingos, and terns. He flies the drone, but AI analyzes the images to count individuals in a flock, speeding up analysis by thousands of hours per survey, he estimates. And time is of the essence. Since 1970, wildlife populations have plummeted by over seventy percent. The world is in the throes of a biodiversity crisis and, according to some researchers, undergoing its sixth mass extinction. The planet has previously endured five mass extinction events, the last of which ushered in the end of the Cretaceous period: the time of the infamous asteroid impact that unleashed a nuclear winter and killed the dinosaurs. That was sixty-six million years ago.

To rescue species from the brink of extinction, first you have to know what you have, and how many – which is often easier said than done, especially in fields with a lot to count. Scientists estimate that less than 20 percent of insect species on Earth have been identified. After AI reviewed just a week’s worth of camera trap footage in Panama, researchers say they found over 300 species previously unknown to science.

[Pelicans in Senegal. Image: Alexandre Delplanque]

The use of AI in scientific research is not without critics. Proponents of high tech in conservation cite AI’s ability to analyze in seconds large datasets that would otherwise take months, to decipher patterns in species’ interactions and distributions undetectable to humans, and to unravel a dizzying array of genomes. Critics point to its environmental impact, potential for bias, and insufficient ethical standards.

Much of the AI work in conservation focuses on analyzing thousands of hours of footage taken from remote cameras or aerial surveys, but it’s unlikely to end there. For now, researchers are focused on processing footage with object detection models, a type of AI that can identify and locate objects within an image or video. These models are often built with convolutional neural networks (CNNs) and are trained to identify species or detect their presence or absence.

Projects employing AI to “save species” often generate a media frenzy. Researchers in South Africa generated a flurry of headlines asking whether AI can save “the world’s loneliest plant.” Scientists deployed drones over inaccessible swathes of the dense Ngoye Forest in search of a female partner for a male cycad at London’s Kew Botanical Gardens. AI scanned the footage for signs of a species considered extinct in the wild, which researchers hope really isn’t extinct – just obscured under the canopy. But some say these headlines are overblown without considering the consequences.

[Counting pelicans in Senegal using a drone equipped with cameras and AI. Image: Alexandre Delplanque]

“There is a tidal wave of enthusiastic research about the applications of AI and much less critical research that looks at the costs, environmentally and socially,” said Hamish van der Ven, head of the Business, Sustainability, and Technology Lab at the University of British Columbia. The training process for an AI model, such as a large language model (LLM), can consume over a thousand megawatt-hours of electricity. The less obvious problem, says Shaolei Ren, whose research focuses on minimizing the health impacts of AI, is the water consumption of data centers.
Data centers house the infrastructure that provides the processing power for AI, and all of that hardware must be cooled, usually with freshwater sourced from the local water supply. Due to its cooling needs, AI is projected to withdraw between 4.2 billion and 6.6 billion cubic meters of water annually by 2027, much of which is lost to evaporation. And the environmental impact is not equally felt, as tech giants export their data centers overseas. Google’s plan to construct new data centers in Latin America sparked massive protests in Chile and Uruguay, biodiverse regions already suffering from severe drought. “Data centers also create a public health crisis due to the air pollutants emitted, including fine particulate matter (PM2.5) and nitrogen oxides (NOx),” said Ren. The public health burden triggered by data centers in the US – primarily situated in low-income areas – is projected to cost $20 billion by 2030.

Yet the footprint of most biologists’ AI work, for the moment, is negligible. For his part, Delplanque has one local computer processing the images, and his HerdNet model – which aids in population counts of densely packed animals, such as elephants and antelopes on the savannah – took around twelve hours to train, compared to LLMs operating on massive servers that run for weeks during the training process.

“We have this concern as scientists all the time: are we actually harming the environment that we’re trying to help? At least for the cases we’re talking about, I don’t think so, because the models we’re running aren’t huge – they’re big for us, but it’s not like Social Network Big Data,” says Laura Pollock, assistant professor in quantitative ecology at McGill University, who aims to deploy AI to extrapolate species interactions.

But computational ecologist Tanya Berger-Wolf argues that current low-power applications aren’t harnessing the full potential of the technology, referring to image recognition as “old-school AI.” Berger-Wolf and Pollock co-authored a paper exploring the “unrealized potential of AI” to expand biodiversity knowledge. “We want to go beyond scaling and speeding up what people already do to something new, like generating testable hypotheses or extracting unseen patterns and combinations,” says Berger-Wolf. “What we’ve been doing with AI so far is obvious, which is all of this rapid image detection and acoustic monitoring, but we should be doing much more than that: using AI to ask the right ecological questions,” says Pollock.

One potential application that draws attention, to both applause and denunciation, is the concept of using AI to decode animal communication. The Earth Species Project is using generative AI and LLMs in hopes of building a translator to communicate with non-human life. There is also Project CETI, which uses a similar approach to understand sperm whales, which communicate via Morse-code-like clicks that, theoretically, can be deciphered. Already, scientists have used machine learning to suggest that elephants address individuals in their family by unique names. But the larger premise of decoding animal communication raises ethical questions and concerns over success. In other words: Will it work? Is it a waste of resources to try? Should we talk to animals at all?

[Counting elephants in Ivory Coast using AI and cameras attached to lightweight aircraft. Image: Alexandre Delplanque]

“We have to choose where these models will make a difference, not just use them because you have a shiny new toy,” Berger-Wolf cautioned. Applications like LLMs carry a large environmental footprint, so it’s “irresponsible to spend resources if the research outcome does not change. And data is a resource.” Models are only as good as the data they’re trained on, which can lead to bias and a misprioritization of conservation actions. The most common issues include spatial bias, where species in certain regions are overrepresented in datasets, and taxonomic bias, where charismatic species like pandas receive more funding, so more data is readily available on them than on, say, an obscure beetle.

But AI can also bias our perceptions and even subtly shape the questions we’re asking, argued van der Ven, who authored a paper on how LLMs downplay environmental challenges. “There are far more options for AI to offer bias, extract resources, and drive overconsumption than there are conservation applications. If I could wave a wand and uninvent AI, I would,” he said. “If we weigh the benefits for conservation against how effectively Amazon is using AI to get consumers to buy more things, it’s a vastly uneven scale.”

In 2024, for its part, Google announced the deployment of an AI model to listen to coral reefs: SurfPerch. Bioacoustics play a key role in assessing reef stability – healthier reefs sound different – and SurfPerch analyzes audio signatures to measure the success of coral restoration efforts or identify impending threats. Around the time of the tool’s deployment, Google also announced it was falling short of its pledged climate targets due to the environmental demands of AI. “It’s not hypocritical to use AI in conservation – it just needs to be used responsibly,” said Berger-Wolf. But when it comes to regulation, neither biodiversity nor AI neatly conforms to geopolitical boundaries, she mused.
  • TOWARDSDATASCIENCE.COM
    (Many) More TDS Contributors Are Now Eligible for Earning Through the Author Payment Program
    A new, more inclusive earnings tier

When we launched the TDS Author Payment Program back in February, our goal was clear: it was important for us “to reward the articles that help us reach our business goals in proportion to their impact.” Since the program’s launch, however, we realized that the number of articles that crossed the initial earnings threshold (5,000 engaged views) was smaller than we’d hoped for. That wasn’t ideal. One of the main advantages of being an independent publication—and a data-focused one, at that—is that we are nimble enough to course-correct when we need to, and can make changes quickly to the benefit of our contributors.

We’re thrilled to share that we’ve recently introduced a new earnings tier: articles that gain 500 engaged views can now earn a minimum payout of $100. The immediate result, and the one we care about the most, is that the number of eligible articles will increase—drastically. A lower threshold will also lead, in many cases, to a much shorter wait before authors know whether an article will earn, providing an incentive for more frequent contributions.

We didn’t want to penalize those authors who took a chance on us during the program’s early days, so we’re applying this inclusive earnings tier retroactively, to all eligible articles published since the program launched on February 19. (We’ve already contacted all authors who published on TDS in February and whose articles have crossed the 500 engaged-view threshold.) All other details concerning the Author Payment Program remain the same, so if you’ve already reviewed and accepted its terms and conditions, there’s no further action you need to take.

Stats are live!

Earnings are just one measure of an article’s reach and impact. Since launching the new TDS site, our authors’ most-requested feature—by a wide margin!—has been access to their articles’ stats. This was always on our roadmap (as we mentioned earlier, we are a data-focused publication, after all), but the consistent feedback we’ve received from our community made it clear that we needed to prioritize it, so we did.

Good news: as of today, all published authors can track their articles’ performance directly from their dashboard on the TDS Contributor Portal — just look for the Analytics tab on the left side of your screen. You’ll be able to see your total views and engaged views (reminder: the latter are views by readers who spend at least 30 seconds on an individual article), as well as the number of total and engaged views during the 30-day earning period following publication. Also visible is the estimated payout for each article given the most current engaged-view count. These stats will give you a solid snapshot of your work’s reach over time, as well as a clear idea of how close you are to crossing each earning tier. Please keep in mind that stats update once a day, so while we understand the impulse to hit the Refresh button every 3 minutes (or seconds…), you can probably find a better use of your time—like brainstorming for your next article!

As our publication continues to evolve, our team is hard at work on the next set of features that will improve TDS for readers and authors alike. Stay tuned—and feel free to reach out (at publication@towardsdatascience.com) with any questions, requests, or feedback you’d like to share.
  • TOWARDSDATASCIENCE.COM
    When Physics Meets Finance: Using AI to Solve Black-Scholes
    DISCLAIMER: This is not financial advice. I’m a PhD in Aerospace Engineering with a strong focus on Machine Learning: I’m not a financial advisor. This article is intended solely to demonstrate the power of Physics-Informed Neural Networks (PINNs) in a financial context.

When I was 16, I fell in love with Physics. The reason was simple yet powerful: I thought Physics was fair. It never happened that I got an exercise wrong because the speed of light changed overnight, or because suddenly e^x could be negative. Every time I read a physics paper and thought, “This doesn’t make sense,” it turned out I was the one not making sense. So, Physics is always fair, and because of that, it’s always perfect. And Physics displays this perfection and fairness through its set of rules, which are known as differential equations. The simplest differential equation I know is this one:

\[ \frac{dx}{dt} = 5, \qquad x(0) = 0 \]

Very simple: we start at the origin, x(0) = 0, at time t = 0, then we move with a constant speed of 5 m/s. This means that after 1 second, we are 5 meters (or miles, if you prefer) away from the origin; after 2 seconds, we are 10 meters away from the origin; after 43,128 seconds… I think you got it.

As we were saying, this is written in stone: perfect, ideal, and unquestionable. Nonetheless, imagine this in real life. Imagine you are out for a walk or driving. Even if you try your best to go at a target speed, you will never be able to keep it constant. Your mind will race in certain parts; maybe you will get distracted, maybe you will stop for red lights, most likely a combination of the above. So maybe the simple differential equation we mentioned earlier is not enough. What we could do is try to predict your location from the differential equation, but with the help of Artificial Intelligence. This idea is implemented in Physics-Informed Neural Networks (PINNs). We will describe them later in detail, but the idea is that we try to match both the data and what we know from the differential equation that describes the phenomenon. This means that we encourage our solution to generally meet what we expect from Physics. I know it sounds like black magic; I promise it will be clearer throughout the post.

Now, the big question: what does Finance have to do with Physics and Physics-Informed Neural Networks? Well, it turns out that differential equations are not only useful for nerds like me who are interested in the laws of the natural universe; they can be useful in financial models as well. For example, the Black-Scholes model uses a differential equation to price a call option so that, given certain quite strict assumptions, the resulting portfolio is risk-free.

The goal of this very convoluted introduction was twofold:

1. Confuse you just a little, so that you will keep reading.
2. Spark your curiosity just enough to see where this is all going.

Hopefully I managed. If I did, the rest of the article follows these steps:

- We will discuss the Black-Scholes model, its assumptions, and its differential equation.
- We will talk about Physics-Informed Neural Networks (PINNs), where they come from, and why they are helpful.
- We will develop our algorithm that trains a PINN on Black-Scholes using Python, Torch, and OOP.
- We will show the results of our algorithm.

I’m excited! To the lab!

1. Black-Scholes Model

If you are curious about the original Black-Scholes paper, you can find it here. It’s definitely worth it. Ok, so now we have to understand the Finance universe we are in: what the variables are, and what the laws are.
First off, in Finance, there is a powerful tool called a call option. A call option gives you the right (not the obligation) to buy a stock at a fixed price, called the strike price, at a set time in the future (let’s say a year from now). Now let’s think about it for a moment, shall we? Let’s say that today the given stock price is $100. Let us also assume that we hold a call option with a $100 strike price. Now let’s say that in one year the stock price goes to $150. That’s amazing! We can use that call option to buy the stock at $100 and then immediately resell it at $150: we just made $150 - $100 = $50 of profit. On the other hand, if in one year the stock price goes down to $80, then we can’t do that. Actually, we are better off not exercising our right to buy at all, so as not to lose money.

So now that we think about it, the idea of buying a stock and selling an option turns out to be perfectly complementary. What I mean is that the randomness of the stock price (the fact that it goes up and down) can actually be mitigated by holding the right number of options. This is called delta hedging. Based on a set of assumptions, we can derive the fair option price in order to have a risk-free portfolio. I don’t want to bore you with all the details of the derivation (they are honestly not that hard to follow in the original paper), but the differential equation of the risk-free portfolio is this:

\[ \frac{\partial C}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 C}{\partial S^2} + r S \frac{\partial C}{\partial S} - rC = 0 \]

Where:

- C is the price of the option at time t
- σ (sigma) is the volatility of the stock
- r is the risk-free rate
- t is time (with t = 0 now and T at expiration)
- S is the current stock price

From this equation, we can derive the fair price of the call option to have a risk-free portfolio. The equation is closed and analytical, and it looks like this:

\[ C(S, t) = S\,N(d_1) - K e^{-r(T-t)} N(d_2) \]

With:

\[ d_1 = \frac{\ln(S/K) + \left(r + \frac{\sigma^2}{2}\right)(T - t)}{\sigma \sqrt{T - t}}, \qquad d_2 = d_1 - \sigma \sqrt{T - t} \]

Where N(x) is the cumulative distribution function (CDF) of the standard normal distribution, K is the strike price, and T is the expiration time. For example, this is the plot of the stock price (x-axis) vs. the call option price (y-axis), according to the Black-Scholes model:

[Plot of stock price vs. call option price under Black-Scholes. Image made by author]

Now this looks cool and all, but what does it have to do with Physics and PINNs? The equation is analytical, so why PINNs? Why AI? Why am I reading this at all? The answer is below.

2. Physics-Informed Neural Networks

If you are curious about Physics-Informed Neural Networks, you can find the original paper here. Again, worth a read.

Now, the equation above is analytical, but again, it is the equation of a fair price in an ideal scenario. What happens if we ignore this for a moment and try to guess the price of the option given only the stock price and the time? For example, we could use a Feed-Forward Neural Network and train it through backpropagation. In this training mechanism, we are minimizing the error L = |Estimated C - Real C|:

[Diagram of the purely data-driven feed-forward setup. Image made by author]

This is fine, and it is the simplest neural network approach you could take. The issue here is that we are completely ignoring the Black-Scholes equation. So, is there another way? Can we possibly integrate it? Of course we can, that is, if we set the error to be

L = |Estimated C - Real C| + PDE(C, S, t)

where PDE(C, S, t) is the Black-Scholes residual,

\[ \text{PDE}(C, S, t) = \frac{\partial C}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 C}{\partial S^2} + r S \frac{\partial C}{\partial S} - rC \]

and it needs to be as close to 0 as possible:

[Diagram of the PINN setup, combining the data loss and the PDE residual. Image made by author]

But the question still stands. Why is this “better” than the simple Black-Scholes? Why not just use the differential equation? Well, because sometimes, in life, solving the differential equation doesn’t guarantee you the “real” solution.
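To make the composite loss concrete before we look at the actual implementation, here is a minimal, self-contained sketch of the whole idea: the closed-form price (used to generate noisy, biased synthetic data), a small feed-forward network, and a loss whose PDE residual is computed with autograd. This is my own illustration, assuming PyTorch, NumPy, and SciPy; the names (bs_price, CallNet, pinn_loss) and the parameter values are invented for this sketch, not taken from the article’s repository.

```python
# A minimal PINN sketch for Black-Scholes -- an illustration, NOT the repo code.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import norm

K, T, R, SIGMA = 100.0, 1.0, 0.05, 0.2  # illustrative market parameters

def bs_price(S, tau):
    """Closed-form call price C = S*N(d1) - K*exp(-r*tau)*N(d2), with tau = T - t."""
    d1 = (np.log(S / K) + (R + 0.5 * SIGMA**2) * tau) / (SIGMA * np.sqrt(tau))
    d2 = d1 - SIGMA * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-R * tau) * norm.cdf(d2)

class CallNet(nn.Module):
    """Small feed-forward net mapping (S, t) -> estimated option price C."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, S, t):
        # For a real experiment you would normalize the inputs first.
        return self.net(torch.cat([S, t], dim=1))

def pinn_loss(model, S, t, C_obs):
    """Data error plus the squared Black-Scholes PDE residual."""
    S = S.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    C = model(S, t)
    grad = lambda out, inp: torch.autograd.grad(
        out, inp, grad_outputs=torch.ones_like(out), create_graph=True)[0]
    C_t, C_S = grad(C, t), grad(C, S)
    C_SS = grad(C_S, S)
    # PDE(C,S,t) = dC/dt + 0.5*sigma^2*S^2*d2C/dS2 + r*S*dC/dS - r*C
    pde = C_t + 0.5 * SIGMA**2 * S**2 * C_SS + R * S * C_S - R * C
    return torch.mean((C - C_obs) ** 2) + torch.mean(pde**2)

# Synthetic "market" data: closed-form prices plus a bias and some noise,
# so the observations deliberately disagree with the ideal model.
rng = np.random.default_rng(0)
S_np = rng.uniform(50.0, 150.0, size=(512, 1))
t_np = rng.uniform(0.0, 0.9, size=(512, 1))
C_np = bs_price(S_np, T - t_np) + 0.5 + rng.normal(0.0, 0.5, size=(512, 1))

S, t, C_obs = (torch.tensor(a, dtype=torch.float32) for a in (S_np, t_np, C_np))
model = CallNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(2000):
    opt.zero_grad()
    loss = pinn_loss(model, S, t, C_obs)
    loss.backward()
    opt.step()
    if epoch % 200 == 0:
        print(f"epoch {epoch:5d}  loss {loss.item():.4f}")
```

The implicit 1:1 weighting between the two loss terms is a design choice you would normally tune, and in practice you could also evaluate the PDE residual on separate collocation points rather than reusing the data points.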
Physics is usually approximating things, and it does so in a way that can create a difference between what we expect and what we see. That is why the PINN is an amazing and fascinating tool: you try to match the physics, but you are strict about the fact that the results also have to match what you “see” in your dataset. In our case, it might be that, in order to obtain a risk-free portfolio, we find that the theoretical Black-Scholes model doesn’t fully match the noisy, biased, or imperfect market data we’re observing. Maybe the volatility isn’t constant. Maybe the market isn’t efficient. Maybe the assumptions behind the equation just don’t hold up. That is where an approach like a PINN can be helpful: we not only find a solution that meets the Black-Scholes equation, but we also “trust” what we see in the data.

Ok, enough with the theory. Let’s code.

3. Hands-On Python Implementation

The whole code, with a cool README.md, a fantastic notebook, and super clear modular code, can be found here.

P.S. This will be a little intense (a lot of code), and if you are not into software, feel free to skip to the next chapter; I will show the results in a friendlier way there. Thanks a lot for getting to this point. Let’s see how we can implement this.

3.1 The config.json file

The whole code can run with a very simple configuration file, which I called config.json. You can place it wherever you like, as we will see. This file is crucial, as it defines all the parameters that govern our simulation, data generation, and model training. Let me quickly walk you through what each value represents:

- K: the strike price — this is the price at which the option gives you the right to buy the stock in the future.
- T: the time to maturity, in years. So T = 1.0 means the option expires one unit (for example, one year) from now.
- r: the risk-free interest rate, used to discount future values. This is the interest rate we are setting in our simulation.
- sigma: the volatility of the stock, which quantifies how unpredictable or “risky” the stock price is. Again, a simulation parameter.
- N_data: the number of synthetic data points we want to generate for training. This will condition the size of the model as well.
- min_S and max_S: the minimum and maximum of the range of stock prices we want to sample when generating synthetic data.
- bias: an optional offset added to the option prices, to simulate a systematic shift in the data. This is done to create a discrepancy between the real-world data and the Black-Scholes prices.
- noise_variance: the amount of noise added to the option prices to simulate measurement or market noise. This parameter is added for the same reason as the bias.
- epochs: how many iterations the model will train for.
- lr: the learning rate of the optimizer. This controls how fast the model updates during training.
- log_interval: how often (in terms of epochs) we want to print logs to monitor training progress.

Each of these parameters plays a specific role: some shape the financial world we’re simulating, others control how our neural network interacts with that world. Small tweaks here can lead to very different behavior, which makes this file both powerful and delicate: changing the values of this JSON file will radically change the output of the code. A sample config is sketched below.
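For illustration, here is what such a config.json might look like. The values below are my own picks for a quick experiment, not the repository’s defaults:

```json
{
  "K": 100.0,
  "T": 1.0,
  "r": 0.05,
  "sigma": 0.2,
  "N_data": 512,
  "min_S": 50.0,
  "max_S": 150.0,
  "bias": 0.5,
  "noise_variance": 0.25,
  "epochs": 2000,
  "lr": 0.001,
  "log_interval": 100
}
```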
3.2 main.py

Now let’s look at how the rest of the code uses this config in practice. The main part of our code lives in main.py, which trains your PINN using Torch, together with black_scholes.py. So what you can do is:

1. Build your config.json file.
2. Run python main.py --config config.json

main.py relies on several other files.

3.3 black_scholes.py and helpers

The implementation of the model is inside black_scholes.py, which can be used to build the model, train it, export it, and predict with it. It uses some helpers as well: data.py, loss.py, and model.py. The Torch model is inside model.py, the data builder (given the config file) is inside data.py, and the loss function that incorporates the PDE residual is inside loss.py.

4. Results

Ok, so if we run main.py, our FFNN gets trained, and we get this:

[Plot of the training losses. Image made by author]

As you can see, the model error is not quite 0, but the PDE residual of the model is much smaller than the data error. That means the model is (naturally) aggressively forcing our predictions to satisfy the differential equation. This is exactly what we said before: we optimize both in terms of the data that we have and in terms of the Black-Scholes model. And we can see, qualitatively, that there is a great match between the noisy + biased real-world (or rather, realistic-world lol) dataset and the PINN:

[Plot of the PINN predictions against the synthetic dataset. Image made by author]

These are the results at t = 0, with the call option price plotted against the stock price at fixed t. Pretty cool, right? But it’s not over! You can explore the results using the code above in two ways:

- Playing with the multitude of parameters in config.json
- Looking at the predictions at t > 0

Have fun!

5. Conclusions

Thank you so much for making it all the way through. Seriously, this was a long one. Here’s what you’ve seen in this article:

- We started with Physics, and how its rules, written as differential equations, are fair, beautiful, and (usually) predictable.
- We jumped into Finance and met the Black-Scholes model — a differential equation that aims to price options in a risk-free way.
- We explored Physics-Informed Neural Networks (PINNs), a type of neural network that doesn’t just fit data but respects the underlying differential equation.
- We implemented everything in Python, using PyTorch and a clean, modular codebase that lets you tweak parameters, generate synthetic data, and train your own PINNs to solve Black-Scholes.
- We visualized the results and saw how the network learned to match not only the noisy data but also the behavior expected by the Black-Scholes equation.

Now, I know that digesting all of this at once is not easy. In some areas, I was necessarily brief, maybe more so than I should have been. Nonetheless, if you want to see things laid out more clearly, again, take a look at the GitHub folder. Even if you are not into software, there is a clear README.md and a simple example/BlackScholesModel.ipynb that explains the project step by step.

6. About me!

Thank you again for your time. It means a lot. My name is Piero Paialunga, and I’m this guy here: I am a Ph.D. candidate in the University of Cincinnati’s Aerospace Engineering Department. I talk about AI and Machine Learning in my blog posts, on LinkedIn, and here on TDS. If you liked the article and want to know more about machine learning and follow my studies, you can:

A. Follow me on LinkedIn, where I publish all my stories
B. Follow me on GitHub, where you can see all my code
C. Send me an email: piero.paialunga@hotmail.com
D. Want to work with me? Check my rates and projects on Upwork!

Ciao. P.S. My PhD is ending and I’m considering the next step in my career! If you like how I work and you want to hire me, don’t hesitate to reach out.
  • WWW.USINE-DIGITALE.FR
    ElevenLabs rolls out a feature for transferring conversations between AI agents
    ElevenLabs is accelerating on agentic AI. In the wake of tech giants Google, Microsoft, and Salesforce, the startup has just released a...
  • WWW.YOUTUBE.COM
    OpenAI’s New Image Generator: An AI Revolution!
  • WWW.YOUTUBE.COM
    DeepSeek V3 - The King is Back…For Free!
  • WWW.YOUTUBE.COM
    China’s DeepSeek - A Balanced Overview