• Microsoft SharePoint Connector Flaw Could've Enabled Credential Theft Across Power Platform
    thehackernews.com
    Feb 04, 2025 · Ravie Lakshmanan · Vulnerability / SharePoint
    Cybersecurity researchers have disclosed details of a now-patched vulnerability impacting the Microsoft SharePoint connector on Power Platform that, if successfully exploited, could allow threat actors to harvest a user's credentials and stage follow-on attacks. This could manifest in the form of post-exploitation actions that allow the attacker to send requests to the SharePoint API on behalf of the impersonated user, enabling unauthorized access to sensitive data, Zenity Labs said in a report shared with The Hacker News ahead of publication. "This vulnerability can be exploited across Power Automate, Power Apps, Copilot Studio, and Copilot 365, which significantly broadens the scope of potential damage," senior security researcher Dmitry Lozovoy said. "It increases the likelihood of a successful attack, allowing hackers to target multiple interconnected services within the Power Platform ecosystem." Following responsible disclosure in September 2024, Microsoft addressed the security hole, assigned an "Important" severity rating, as of December 13. Microsoft Power Platform is a collection of low-code development tools that let users build analytics, process automation, and data-driven productivity applications.
    The vulnerability, at its core, is an instance of server-side request forgery (SSRF) stemming from the use of the "custom value" functionality within the SharePoint connector, which permits an attacker to insert their own URLs as part of a flow. However, for the attack to be successful, the rogue user needs to hold both the Environment Maker role and the Basic User role in Power Platform. This also means that they would need to first gain access to a target organization through other means and acquire these roles. "With the Environment Maker role, they can create and share malicious resources like apps and flows," Zenity told The Hacker News. "The Basic User role allows them to run apps and interact with resources they own in Power Platform. If the attacker doesn't already have these roles, they would need to gain them first." In a hypothetical attack scenario, a threat actor could create a flow for a SharePoint action and share it with a low-privileged user (read: victim), resulting in a leak of their SharePoint JWT access token. Armed with this captured token, the attacker could send requests outside of the Power Platform on behalf of the user to whom access was granted. That's not all. The vulnerability could be extended further to other services like Power Apps and Copilot Studio by creating a seemingly benign Canvas app or a Copilot agent to harvest a user's token and escalate access further. "You can take this even further by embedding the Canvas app into a Teams channel, for example," Zenity noted. "Once users interact with the app in Teams, you can harvest their tokens just as easily, expanding your reach across the organization and making the attack even more widespread."
    "The main takeaway is that the interconnected nature of Power Platform services can result in serious security risks, especially given the widespread use of the SharePoint connector, which is where a lot of sensitive corporate data is housed, and it can be complicated to ensure proper access rights are maintained throughout various environments." The development comes as Binary Security detailed three SSRF vulnerabilities in Azure DevOps that could have been abused to communicate with the metadata API endpoints, thereby permitting an attacker to glean information about the machine's configuration.
  • Tesla Sales Plunge 63% in EU's Second-Biggest EV Market
    www.bloomberg.com
    Tesla Inc. registrations plummeted 63% last month in France, the European Union's second-biggest market for battery-electric vehicles. The manufacturer registered only 1,141 cars in January, the fewest since August 2022, according to French industry association La Plateforme Automobile. Tesla underperformed the overall industry and total EV sales, which dipped 6.2% and 0.5%, respectively.
  • The Art of Ismail Inceoglu
    www.iamag.co
  • Takaya Imamura On Nintendo's Future: Talented And Charismatic Devs Are "Crucial"
    www.nintendolife.com
    Relying exclusively on big IP would be a "big mistake". Former Nintendo artist Takaya Imamura has kicked off the new year with a new blog post titled "Why I Left Nintendo" more than four years after departing from the company's Japanese headquarters. While he goes into all sorts of history about his career, he's also shared his thoughts about Nintendo's future and how "crucial" talented and charismatic developers are to the success of the company and the value of its IP.
  • Nintendo Music Adds Classic Super Mario SNES Soundtrack, Here's Every Song Included
    www.nintendolife.com
    Wahoo! Nintendo has released its first 'Nintendo Music' soundtrack of February. This time it's from the Super Nintendo title Super Mario World, which originally made its debut in the early '90s.
  • AI agents for e-commerce startup, founded by Google and DeepMind alums, raises $10M seed
    techcrunch.com
    AI is changing how we shop online, making our experiences more personalized. Smart assistants recommend products, negotiate deals, and even handle customer service. Big retailers and smaller businesses are using AI to improve search, supply chains, and checkout. If AI companies (and their investors) have their way, shopping will soon be focused on chatting with an assistant, with businesses automating everything behind the scenes. Dubai-based Qeen.ai (stylized as qeen.ai) is working to make this a reality in the Middle East and beyond. The startup has raised $10 million to scale its platform, which provides autonomous AI agents for e-commerce businesses. Prosus Ventures, a major e-commerce investor, led the seed round, which is not only one of the largest in the Middle East's AI industry but also one of the largest in MENA overall. The VC believes Qeen.ai is well-positioned to bring AI-driven automation to merchants as AI agents reshape online marketplaces. Founders Morteza Ibrahimi (CEO), Ahmad Khwlieh (CTO), and Dina Alsamhan (CBO) started Qeen after years of working on AI at Google and DeepMind. Ibrahimi, in an interview with TechCrunch, said that they homed in on e-commerce in part opportunistically: all three had worked at Google Ads in various roles during their time with the search giant, and they saw first-hand how other alums built highly successful e-commerce businesses. On top of their AI expertise, the trio knew how to run ads and optimize SEO exceptionally well and thought it could be a strong combination.
    Google and DeepMind background
    E-commerce has been steadily growing for years, but apart from certain spikes (particularly during holiday periods) it still accounts for between 15% and 20% of retail sales (even in a mature market like the U.S., it was just over 16% as of the last quarter, per the U.S. Census Bureau). Qeen's thesis is that this could grow if e-commerce processes were run better. Success in e-commerce, they believed, should be about great products and operational efficiency, not just who can game the ad system best. That insight led them to build a platform that helps e-commerce sellers grow without relying on ads as their primary driver. The global e-commerce market is expanding fast, driven by changing consumer behavior, digital payments, and better logistics. In MENA, the market is expected to hit $50 billion by 2025, with Saudi Arabia and the UAE leading the growth. Qeen.ai is tapping into this boom by developing AI-powered marketing agents designed for e-commerce businesses across MENA. These fully automated agents handle content creation, marketing, and conversational sales, allowing small and mid-sized merchants to compete without relying on expensive agencies or deep ad expertise. Unlike traditional solutions, Qeen's AI continuously learns from consumer interactions using its proprietary RL-UI technology, refining marketing strategies in real time for better results.
    From Google ads to AI-driven e-commerce
    While AI-powered sales and customer service tools often struggle with high churn rates, as businesses frequently switch platforms, Qeen.ai claims to see stronger retention in e-commerce and marketing. Ibrahimi attributes this to how deeply Qeen's AI agents integrate into merchants' workflows, making them a core part of daily operations rather than a replaceable tool. A key feature driving engagement is dynamic text personalization, which adjusts content based on user behavior and device type. For example, an iPhone user might see product details in bullet points for quick reading, while a laptop user gets a detailed paragraph. Since launching its Dynamic Content agent in Q2 2024, Qeen.ai has served over 15 million users, generated 1 million SKU descriptions, and helped merchants increase sales by 30%, according to the company. "We worked with a client to optimize their content and SEO. After using our AI plugins, their search volume increased by 40%, and their Google ranking improved from 22 to 18, all with zero manual effort. The entire process was fully autonomous," said Ibrahimi, giving another instance where Qeen's AI capabilities have shone. Qeen employs a subscription-based pricing model and incorporates value-based pricing, a growing trend in AI services. Currently, qeen generates revenue through two subscription models: content automation, where businesses pay per active SKU (typically $0.10 to $0.20 per SKU per month), and its AI marketing agent, whose pricing is based on per-interaction volume. Ibrahimi declined to disclose the number of businesses using qeen, as well as revenue growth metrics. Notable clients include Dubai Store, 6th Street, and Jumia.
    Standing out with talent
    Ibrahimi left DeepMind in early 2023 to co-found the startup. That same year, the company raised a $2 million pre-seed round before launching its product in June 2024. With its recent $10 million seed round, qeen has raised a total of $12 million in under a year. During this time, AI-powered marketing agents have gained traction worldwide, with several startups, particularly in the U.S. and Europe, entering the space. Competitors like YC-backed Unusual and Rankai are tackling similar challenges, so how does Qeen stand out? Well, for one, most of these AI startups focus on developed markets, while Qeen instead is prioritizing the Middle East first, a region largely underserved by AI-driven marketing automation tools. According to Ibrahimi, Qeen will serve small businesses across MENA, establish a strong foothold, and then expand globally. Deep tech expertise and a strong talent pool give Qeen an edge over new entrants, the chief executive added. Two of its co-founders earned PhDs in AI over a decade ago, long before AI became mainstream. Ibrahimi himself previously led a DeepMind research team specializing in self-learning, goal-driven AI agents, the same technology that now powers Qeen. "One of the most exciting things we've seen is the quality of AI talent here," Ibrahimi said. "We've attracted great talent both locally and internationally; people have left the Bay Area, Europe, and the UK to come here and build with us." Qeen.ai currently employs over 25 people across the UAE and Jordan. The seed funding will support qeen's growth strategy by expanding its AI platform, scaling its team, and attracting more customers, it said. Wamda Capital, 10X Founders Fund, and Dara Holdings are among the other investors in this round.
  • Neuralk-AI is developing AI models specifically designed for structured data
    techcrunch.com
    Tabular data is a broad term that encompasses structured data that generally fits into rows and columns. It can be a SQL database, a spreadsheet, a .CSV file, etc. While there has been tremendous progress on artificial intelligence applied to unstructured and sequential data, these large language models are fuzzy by design. They are built to manipulate input tokens to generate a coherent output without necessarily following a fixed structure. The best LLMs are also either expensive to access via an API or expensive to run on your own cloud infrastructure. And yet, many companies already have a data strategy with a data warehouse or data lake to centralize all important data and some data scientists who can leverage this data to improve the company's strategy. French startup Neuralk-AI is an artificial intelligence company that has been working on AI models focused on tabular data. The company this week announced $4 million in funding. "Data with real value for companies is data that was identified a long time ago, structured in the form of a table, and used by the data scientists of these companies to create all their machine learning algorithms," Neuralk-AI co-founder and Chief Scientist Officer Alexandre Pasquiou told TechCrunch. Neuralk-AI thinks there's an opportunity in revisiting AI model development, but with a specific focus on structured data. At first, it plans to offer its model as an API to data scientists working for commerce companies because these companies love data: think product catalogs, customer databases, shopping cart trends, etc. "Today, LLMs are great for search, natural user interaction, and answering questions based on unstructured documents. But it has some limitations the moment we go back to classic machine learning, which is really based on classic tabular data," Pasquiou said. With Neuralk-AI, retailers can automate complex data workflows with smart deduplication and enrichment. But they could also use the company's models to detect fraud, optimize product recommendations, and generate sales forecasts that could be used for inventory management and product pricing. Fly Ventures led the company's $4 million round, with SteamAI also participating. Several business angels also invested in the startup, such as Thomas Wolf from Hugging Face, Charles Gorintin from Alan, and Philippe Corrot and Nagi Letaifa from Mirakl. The team is still actively working on its models. It plans to test with a group of leading French retailers and commerce startups, such as E.Leclerc, Auchan, Mirakl, and Lucky Cart. "Within three or four months, we'll release the first version of our model and the public benchmark on which we'll be able to rank our model compared to the state-of-the-art in this space," Pasquiou said. "And in September, the idea is to be the best tabular foundation model in everything related to representation learning."
  • Conversational-Amplified Prompt Engineering Is Gaining Traction In Generative AI
    www.forbes.com
    Conversational-amplified prompt engineering (CAPE) is increasingly being utilized by savvy users of generative AI and large language models (LLMs). (Image: Getty)
    In today's column, I showcase a prompt engineering technique that I refer to as conversational-amplified prompt engineering (CAPE). Some also use the shorter moniker of conversational prompt engineering (CPE), though that is a bit confusing since it has a multitude of other meanings. In any case, the underlying concept is that you can substantively improve your prompting by carrying on a conversation with generative AI and large language models (LLMs), such that the AI pattern-matches and trains to how you write your prompts. This provides numerous benefits. Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
    Importance Of Prompt Engineering
    Readers might recall that I previously posted an in-depth depiction of over fifty prompt engineering techniques and methods, see the link here. There is no doubt that undertaking proficient prompt engineering is a learned skill. Top-notch prompt engineers know that leveraging various proven prompting techniques is the best way to get the most out of generative AI and likely garner you some good bucks for your skilled deeds. The usual approach to prompting is a one-and-done method. You enter a prompt, wait to see what response you get, and then either move on or possibly craft another prompt to clarify or further expand on the prompt that you just entered. It is a rinse-and-repeat kind of cycle. There is an interesting twist that few seem to realize. The trick is this. Generative AI is highly capable of identifying patterns in how humans write. Indeed, the initial data training for LLMs is done by widely scanning the Internet for human-composed essays, narratives, stories, poems, and the like. Via computational and mathematical pattern-matching, AI figures out the underlying patterns associated with human compositions. That's how AI is so seemingly fluent when conversing with the user. We can use that same pattern-matching facility to catch onto how someone tends to write their prompts. It is readily feasible to essentially data-train the generative AI on your prompting style. Henceforth, the AI will be more likely to interpret your prompts as per what you have in mind and not wander afield of what you intend. The twist then is that you can train generative AI on the nature of your prompting and garner handy benefits accordingly when performing ongoing prompt engineering. Easy-peasy. Boom, drop the mic.
    Conversational-Amplified Prompt Engineering
    I refer to this specialized technique as conversational-amplified prompt engineering (CAPE). The naming is logical and sensible. You are going to engage in a focused conversation with generative AI and do so to have the AI pattern-match on your prompting style. The emphasis during that focused conversation will be how you write prompts. It isn't especially crucial what you are prompting about; what matters more is how you compose your prompts. In that sense, the prompt engineering that you subsequently undertake will be amplified because of that training. The AI will allow you to shortcut your prompts and still get the gist of your intentions.
    The AI will pretty much get the drift of your prompts due to having already examined and patterned them. Here are some outstanding benefits of the CAPE technique:
    (a) Generative AI will be able to undertake personalized prompt interpretations.
    (b) Reduces the overall prompt engineering effort required by the user.
    (c) Increases efficiency since you don't have to be laborious in your prompts.
    (d) Enhances prompting, including inclusion of popular prompt engineering techniques.
    (e) Promotes adaptation to domain-specific language or instructions.
    (f) Saves on cost because of fewer miscast prompt clarifications.
    That last point is one that I particularly point out during my classes on prompt engineering. One of the biggest aims of CAPE is that the AI will home in on what you want the AI to do. The beauty is this. You perhaps at times find yourself redoing a prompt several times, vigorously trying to get the AI to grasp the substance of what you are asking. Each of those tries will cost you in terms of time and a transaction fee by the AI system if you are paying for your AI usage. By training generative AI on your prompting style, there is a lessened chance of multiple trials and a heightened chance that the AI will garner what your prompt involves straightaway.
    Example #1 Of Simple CAPE
    I will provide a few quick examples to illustrate how conversational-amplified prompt engineering works. I logged into generative AI to craft these examples. In this instance, I opted to use ChatGPT since it is widely popular and has about 300 million weekly active users. The same aspects shown here can be undertaken in any of the other major generative AI apps such as Claude, Gemini, Copilot, Llama, etc. For this first example, I ask the generative AI to summarize an article for me. After the AI does so, I will provide feedback so that the AI can then pattern on my prompting style.
    My entered prompt: "Summarize this article."
    Generative AI response: "Here's a summary. Let me know if you want a different focus."
    My entered prompt: "Looks good, but I prefer bullet points over paragraphs."
    Generative AI response: "Got it! Here's the summary in bullet points."
    My entered prompt: "Thanks, I want you to remember that when I ask for summaries, I normally intend that bullet points are to be used rather than paragraphs."
    Generative AI response: "I will remember that preference and abide by it accordingly."
    In this instance, the AI initially defaulted to giving the summary in a narrative or paragraph form. That's fine but not what I usually want. I had the generative AI redo the summary into my preference of bullet points. In addition, and this is crucial, I explicitly told the AI that henceforth when I ask for summaries, I want them shown in bullet form. The AI might not have otherwise detected that this is a discernible pattern of what I prefer. You will likely realize that this now becomes a handy-dandy shortcut for me. Every time that I ask for a summary, I won't have to indicate in my prompt that I want bullet points, nor will I need to do a follow-up prompt to get bullet points. The AI will merely always proceed to give me summaries in bullet point form. Happy face. When I bring up this example in class, there is usually a smarmy person who objects and says that I am trapping myself into an infinite world of bullet points. The AI will seemingly never do anything for me other than bullet points. Though that might on the surface seem to be the case, in practice it isn't so.
    All you need to do is state in a subsequent prompt that you don't want the bullet points on a particular occasion. Furthermore, if you decide that you no longer want an always-on bullet points summarization, just tell the AI that the pattern is no longer to be used. Problem solved.
    Example #2 Of Simple CAPE
    I went ahead and did some training interactions with generative AI to get it used to what I say in my prompts. The number of those interactions can be relatively modest, perhaps a dozen or so, as long as you explicitly direct the AI to be watchful during those interactions. You will need to get the AI to identify what patterns it has detected and then refine or correct them accordingly. To give you an example of how this pays off, I asked the generative AI about my car and the response was nicely crafted by the AI. Take a look.
    My entered prompt: "What should I do about my car?"
    Generative AI response: "You have previously noted that you have two cars. One of them is a new car that you just recently bought. The other is a dozen years old and has been having frequent car troubles. When you mention what you should do about your car, should I generally assume that you are referring to the older car since it is the one that has troubles?"
    My entered prompt: "Yes, you can generally assume that if I am asking questions about my car that I am likely referring to my older car, thanks."
    Generative AI response: "Will do."
    You can see that the generative AI had picked up on the fact that I have two cars, one of which is older and likely to be the one that has troubles. My prompt in this example was ambiguous about which of the two cars I was referring to. The AI was able to clarify and guess which car I had in mind, doing so aptly because of patterning on my earlier prompts.
    Example #3 Of Simple CAPE
    You might recall that in my first example, I mentioned that you can readily override a pattern that the AI has about your prompting style. This example illustrates that facet.
    My entered prompt: "Summarize this article about AI and quantum theory."
    Generative AI response: "I will summarize the article and do so in a series of bullet points. That style is what you usually seem to prefer in terms of my preparing summaries."
    My entered prompt: "In this case, I want the summary to be very detailed and in-depth. Thus, this is an exception to my usual preference."
    Generative AI response: "OK, I will prepare a detailed and in-depth summary this time. I will continue with the bullet points style thereafter."
    My entered prompt: "Yes, that's good, thanks, please proceed."
    Voila, it is as easy as falling off a log to recast the prompting style that the AI has patterned.
    Advanced Aspects About CAPE
    When you do the training regarding your prompting style, there are three main practices that I highly recommend:
    (1) Big picture. Converse to establish how you prompt overall.
    (2) Particular focus. Converse to establish specific prompting situations.
    (3) Feedback oriented. Provide feedback focused on your prompting preferences.
    The first angle to undertake is to have the generative AI data-train on your overall prompting style. Just do a wide array of prompts as though you are playing around with the AI. The wider the types of prompts and questions, the better. Use open-ended questions. Use closed-ended questions that derive a precise answer. And so on. The second angle is to converse with the AI on any specific considerations about your prompts.
    For example, a healthcare professional might tend to use generative AI for healthcare-specific questions and problem solving. In that case, it would be useful to carry out a series of prompts with the AI that are immersed in the healthcare domain. This allows the AI to pattern on those kinds of customized prompts. The third angle is to provide feedback to the AI as it is attempting to pattern on your prompts. Do not assume that the AI is patterning in the right way. It might not be. Make sure to directly ask what patterns it is detecting. You can then give further guidance to make sure the patterns are apt.
    Research On The CAPE Topic
    Conversational-amplified prompt engineering is the naming convention that I use for this innovative advanced technique of prompt engineering. There is AI research about prompt engineering that fits this depiction, though the phrasing is often stated as conversational prompt engineering (CPE) instead of CAPE. The heartburn I have with CPE as a moniker is that it means different things to different AI insiders. To help clarify, I use CAPE as a naming convention. One popular avenue of research on this approach entails using a dedicated front-end tool that aids in patterning on a prompting style. If you are interested in that kind of research, a recent paper entitled "Conversational Prompt Engineering" by Liat Ein-Dor, Orith Toledo-Ronen, Artem Spector, Shai Gretz, Lena Dankin, Alon Halfon, Yoav Katz, and Noam Slonim, arXiv, August 8, 2024, makes these salient points (excerpts):
    "Prompts are how humans communicate with LLMs. Informative prompts are essential for guiding LLMs to produce the desired output."
    "However, prompt engineering is often tedious and time-consuming, requiring significant expertise, limiting its widespread use."
    "We propose Conversational Prompt Engineering (CPE), a user-friendly tool that helps users create personalized prompts for their specific tasks. CPE uses a chat model to briefly interact with users, helping them articulate their output preferences and integrating these into the prompt."
    "The process includes two main stages: first, the model uses user-provided unlabeled data to generate data-driven questions and utilize user responses to shape the initial instruction. Then, the model shares the outputs generated by the instruction and uses user feedback to further refine the instruction and the outputs."
    Whether you use a specialized tool or merely rely on generative AI pattern-matching generically, the underlying concepts and practices are roughly the same.
    When CAPE Comes To The Rescue
    Let's cover the when and where of conversational-amplified prompt engineering. Users who infrequently use generative AI are not likely to be candidates for a conversational-amplified prompt engineering technique, since they rarely use AI and probably aren't pushing boundaries on their prompting. Thus, this technique is not for everyone, nor is it a silver bullet. On the other hand, anyone frequently using generative AI, and especially stretching the boundaries of their prompts, would greatly benefit from this approach. It doesn't take much time and effort to do the data-training. Plus, once you've done the upfront heavy lifting, things are good to go. A final thought for now. They say that practice makes perfect. It is a commonly noted piece of sage advice. I suppose the meaning is clear-cut and reasonable. Vince Lombardi famously said this in contrast: "Practice does not make perfect. Only perfect practice makes perfect." Shocker. Contrarian.
    An intriguing alternative perspective. The bottom line is this: go ahead and proceed with conversational-amplified prompt engineering if that's going to be to your advantage when using generative AI. Remember that it's not just the practice alone that will work the magic; it is also, shall we say, perfectly practicing that really counts. Sage advice, indeed.
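    As a concrete illustration of the idea, here is a minimal Python sketch, under stated assumptions, of one way to make the patterned preferences explicit rather than relying on the AI to silently remember them: the preference list, the helper function name, the model name, and the use of the OpenAI chat API are illustrative choices, not something prescribed by the column.

        # A minimal sketch of the CAPE idea: keep the prompting-style preferences learned
        # in earlier "training" conversations and prepend them to every new request.
        # Assumes the openai package is installed and OPENAI_API_KEY is set.
        from openai import OpenAI

        client = OpenAI()

        # Preferences gathered from earlier conversations (see Examples #1 and #2 above).
        style_preferences = [
            "When asked for a summary, use bullet points rather than paragraphs.",
            "Questions about 'my car' usually refer to the older, trouble-prone car.",
        ]

        def cape_prompt(user_prompt: str, override: str = "") -> str:
            """Send a prompt with the learned prompting-style preferences prepended."""
            preamble = "Apply these standing preferences unless told otherwise:\n"
            preamble += "\n".join(f"- {p}" for p in style_preferences)
            if override:
                # Mirrors Example #3: a one-off exception to the standing preferences.
                preamble += f"\nFor this request only: {override}"
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name; any chat-capable model works
                messages=[
                    {"role": "system", "content": preamble},
                    {"role": "user", "content": user_prompt},
                ],
            )
            return response.choices[0].message.content

        # Usual case: the summary comes back in bullet points without being asked.
        print(cape_prompt("Summarize this article about AI and quantum theory."))

        # One-off override, as in Example #3.
        print(cape_prompt("Summarize this article about AI and quantum theory.",
                          override="Make it a detailed, in-depth narrative instead."))

    The same pattern works with any chat model that accepts a system message; the point is simply that the "trained" preferences live in a place you control and can be overridden per request.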
  • Investors Urge EU To Keep Green Rules In Face Of Trump Turmoil
    www.forbes.com
    U.S. President Donald Trump speaks with European Commission President Ursula von der Leyen prior to their meeting at the World Economic Forum in Davos in January 2020. (AFP via Getty Images)
    More than 200 investor groups and financial institutions managing a total of $6.8 trillion in assets have urged the EU not to water down green finance rules, making the case that Europe's sustainability regulations provide a stabilizing counterweight to the unpredictability of the regime of returning U.S. President Donald Trump. In a joint statement delivered Tuesday, the investors called on the European Commission to "preserve the integrity and ambition" of the EU's sustainable finance regulations. Far from making Europe uncompetitive, the signatories said, the current rules helped offer long-term stability in an increasingly uncertain global economic climate. "Pursuing climate action and economic growth in tandem is not just possible, but necessary," Stephanie Pfeifer, CEO of the Institutional Investors Group on Climate Change, told me. "If the EU can showcase it is committed to these principles, it will give investors and companies the confidence to invest for the long term in the technologies, companies and sectors that will drive both economic growth and decarbonization." "Climate solutions and the transition of key sectors of the economy represent one of the biggest investment opportunities of the 21st century," Pfeifer added, saying that such investments would "help to wean Europe off its dependency on fossil fuel and power tomorrow's economy with cheap, reliable and sustainable energy."
    The warning from investors is the latest in a series of responses to a November announcement by the European Commission President, Ursula von der Leyen, that the bloc was seeking to "streamline" the financial reporting requirements built into the EU's Green Deal laws. At that time, more than 50 major companies, including IKEA, H&M and Nestlé, issued an open letter to the EU, urging that it "stay committed to protecting and restoring nature and biodiversity, and combatting climate change." European business lobby groups and some European political powers, including the government of France, have been calling on the European Commission to weaken sustainability reporting rules. Writing on social media platform X last month, French European affairs minister Benjamin Haddad wrote: "Our companies need simplification, not additional administrative burdens." Speculation that the European Commission could try to weaken EU financial reporting requirements has intensified since Trump's return to the White House, where he has undertaken a program of systematically censoring language around climate change, sustainability and human rights across the new administration. At the same time, Trump's incoming sweeping tariffs, targeting neighboring countries Canada and Mexico, as well as the EU bloc, have thrown global markets into turmoil.
    But Aleksandra Palinska, executive director at the European Sustainable Investment Forum, said such uncertainty only served to strengthen the case for strong green finance rules. Highlighting the 2024 Draghi report, which identified an $825 billion funding gap for Europe to achieve its industrial decarbonization and competitiveness objectives, Palinska said: "To play their role, investors need quality, reliable and comparable corporate disclosures, including on sustainability risks and impacts. Sweeping changes to these [green finance] rules, before they are fully implemented, will create regulatory uncertainty and are likely to hinder the contribution investors can make to sustainable growth." In earlier responses, François Gemenne, a professor at HEC Paris and a lead author of the Intergovernmental Panel on Climate Change's sixth assessment report, told Sustainable Views magazine that "the best response to the policies implemented in the U.S. is to beef up the EU green agenda, not to weaken it. Rather than follow Trump's way, we should design our own path." "The key to competitiveness is access to cheap and abundant energy," Gemenne said. "We don't have fossil fuels in Europe, and so we cannot drill, baby, drill. But we can produce renewables and make energy prices go down."
  • Federal employee unions are suing the Treasury and alleging Elon Musk's DOGE gained illegal and 'unprecedented' access to data
    www.businessinsider.com
    Three federal employee unions filed a lawsuit against the Treasury Department on Monday. The unions accused Treasury Secretary Scott Bessent of sharing sensitive data with Elon Musk's DOGE. The White House said Musk "has abided by all applicable federal laws."
    Three federal employee unions have accused the Treasury Department of giving Elon Musk's Department of Government Efficiency unlawful access to sensitive information. In a lawsuit filed Monday, lawyers acting for the Alliance for Retired Americans, the American Federation of Government Employees, and the Service Employees International Union said Treasury Secretary Scott Bessent violated federal law when he shared the department's data with DOGE. The three union groups are represented by lawyers from the Public Citizen Litigation Group and the State Democracy Defenders Fund. "Federal laws protect sensitive personal and financial information from improper disclosure and misuse, including by barring disclosure to individuals who lack a lawful and legitimate need for it," the lawyers wrote. "In his first week as Treasury Secretary, defendant Bessent violated these restrictions." According to the lawsuit, Musk and DOGE staffers had sought access to the Bureau of Fiscal Service's "records for some time, only to be rebuffed by the employee then in charge of the Bureau." The Bureau of Fiscal Service is a department within the Treasury that oversees all federal payments and collections. Bessent then put that employee on leave and gave DOGE's staffers "full access" to the Bureau's data and computers, the lawsuit said. "The scale of the intrusion into individuals' privacy is massive and unprecedented," the lawsuit continued. "People who must share information with the federal government should not be forced to share information with Elon Musk or his 'DOGE.' And federal law says they do not have to," the lawsuit added. Lawyers from the Public Citizen Litigation Group and State Democracy Defenders Fund did not respond to requests for comment from Business Insider.
    On Monday, President Donald Trump told reporters that Musk was given access to Treasury data so that he could identify wasteful government spending. "He's got access only to letting people go that he thinks are no good, if we agree with him. And it's only if we agree with him," Trump said while signing executive orders in the Oval Office. "Elon can't do and won't do anything without our approval and we will give him the approval where appropriate," Trump added. A White House spokesperson told BI on Monday that Musk was a "special government employee" and would not be paid for his services. According to federal law, special government employees cannot work for more than 130 days in a 365-day period. White House press secretary Karoline Leavitt told reporters on the same day that Musk has "abided by all applicable federal laws." The Treasury and Musk did not respond to requests for comment from BI.
    Shortly after winning the November election, Trump announced that Musk and biotech entrepreneur Vivek Ramaswamy would co-lead DOGE. Trump said in his announcement that the commission would cut wasteful federal spending and slash excess regulations. Last month, Ramaswamy said he was leaving DOGE, leaving Musk as its sole leader. In October, the Tesla and SpaceX CEO outlined his goals for DOGE. At a Trump campaign event, he said the committee would save the government at least $2 trillion, though he did not specify what cuts he would make to achieve that target. Achieving Musk's $2 trillion target would involve cutting government spending by nearly a third. The federal government spent $6.75 trillion in the 2024 fiscal year. In January, Musk appeared to walk back his estimate, saying that reducing $2 trillion in spending would be a "best-case outcome" for DOGE. The billionaire said in an interview with political strategist Mark Penn that the commission had a "good shot" at saving $1 trillion. "If we can drop the budget deficit from $2 trillion to $1 trillion and free up the economy to have additional growth such that the output of goods and services keeps pace with the increase in the money supply, then there will be no inflation. So that, I think, would be an epic outcome," Musk told Penn.