• The 3 most important KPIs for running an on-device acquisition campaign

    On-device channels are no longer all about preloads. Today, telcos represent another performance marketing channel with transparent reporting and deeper insights. To get the full picture of how your on-device campaigns perform, it’s critical to prioritize long-term KPIs. It’s the only way the stickiness of users acquired through these channels really shines. Why? On-device campaigns reach users when they’re setting up their new devices and looking to download apps they’ll use throughout the device lifetime, not necessarily right away. Think about it - if you download a booking app from an ad during device setup, are you planning to book a vacation immediately or later down the road?

    This means attribution is a waiting game for on-device campaigns, with day 30 as the turning point. In fact, if a user engages with your app 30 days down the line, they’re more likely to stay active for a long period of time. Simply put, LTV is high for on-device campaigns, so you want KPIs that let you measure and optimize the value of the users you attract far down the road.

    ROAS

    ROAS is king when it comes to measuring the long-term value of your users. To get the clearest idea of your ROAS and how to optimize it, there are a few things to keep in mind. First, ROAS should be measured on D30/60/90, not D1/3/7. With on-device channels, a user who downloads an app during device setup does so expecting to open it in the future, not right away, so the first open may not come until 30 days in or later.

    You should also pay attention to how ROAS is measured. ROAS is calculated by dividing the revenue a campaign generates by the amount it costs to run. In the context of on-device campaigns, that revenue comes from in-app purchases, subscriptions, or ad monetization.

    When measuring the effectiveness of your on-device campaigns, it’s important to calculate ROAS using your on-device ad revenue rather than average ad revenue, which will be lower. Ad revenue is high for users acquired through on-device campaigns - on-device channels use unique data points and deep algorithms to ensure the right bid for each individual user. To get the clearest picture of where you stand against your ROAS goals, integrate ad revenue reporting with your on-device platform.

    Once calculated, ROAS gives a clear monetary view of your campaigns: how much you spent versus how much you brought in. That monetary view matters because it tells you whether your on-device campaigns are reaching valuable users, and breaking ROAS down by placement shows which placements are performing best. Knowing how to maximize ROAS means maximizing the long-term value and engagement of your users, too.

    Cost KPIs

    Comparing LTV to spend will help you determine whether your users are generating enough revenue to cover your spend and ultimately turn a profit. It also helps you pinpoint the parts of your strategy that are effective and those that need adjustment. There are a few ways to measure cost effectiveness; here are the two most common for on-device campaigns.

    Cost per action (CPA)

    If it’s quality you’re looking for, first run a CPA campaign to confirm that you’re looking in the right places for users who will engage with your app. To count as a conversion, a user must see the ad, install the app, and complete the action you preset. You only pay for users who reach the chosen point in the app experience after installation.

    A CPA that is higher than LTV is a clear indicator that your campaigns are focused on less relevant channels or touchpoints, while a CPA that is lower than your LTV confirms that you are attracting high-quality users. In the context of on-device campaigns, this is key because it means you won’t pay immediately for a user who may not engage for a month or so. The pricing model also integrates in-app revenue, which is useful for apps that rely more on IAPs than ads.

    Cost per retained user (CPRU)

    It’s also worthwhile to track how much you’re paying for the user who is still there on day 30. CPRU takes into account conversions and retention rate: if your budget is $10k, you have 1,000 conversions, and a day 1 retention rate of 20%, you come away with 200 retained users at a $50 cost per retained user. If you can increase retention, you end up with higher-quality users at a lower CPRU. When you measure CPRU, retention becomes a success metric for your UA campaigns and can help you determine whether you have enough engaged users to cover spend.

    On day 30 and beyond, these KPIs can help you optimize your on-device campaigns to reach the most engaged users with high LTV.
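
    To make the arithmetic behind these KPIs concrete, here is a minimal Python sketch that computes D30/60/90 ROAS, compares CPA against LTV, and derives CPRU from the $10k example above. The revenue figures, LTV, and completed-action count are hypothetical placeholders, not data from any real campaign or platform.

```python
# Minimal sketch of the three KPIs discussed above. All numbers and field
# names are illustrative; real campaigns would pull these from an MMP or
# on-device platform's reporting.

def roas(revenue: float, spend: float) -> float:
    """ROAS = revenue generated by the campaign / cost of running it."""
    return revenue / spend

def cpa(spend: float, completed_actions: int) -> float:
    """Cost per action: spend divided by users who completed the preset action."""
    return spend / completed_actions

def cpru(spend: float, conversions: int, retention_rate: float) -> float:
    """Cost per retained user: spend divided by users still active at the chosen day."""
    retained_users = conversions * retention_rate
    return spend / retained_users

# Hypothetical on-device campaign: $10k spend, 1,000 conversions.
spend = 10_000.0
conversions = 1_000

# Revenue (IAP + subscriptions + ad monetization) measured at D30/60/90,
# not D1/3/7, because on-device users often open the app weeks later.
revenue_by_day = {30: 6_500.0, 60: 9_800.0, 90: 13_200.0}
for day, revenue in revenue_by_day.items():
    print(f"D{day} ROAS: {roas(revenue, spend):.2f}")

# CPA sanity check: a CPA above LTV means the campaign is buying low-quality traffic.
ltv = 18.0
campaign_cpa = cpa(spend, completed_actions=700)
verdict = "healthy" if campaign_cpa < ltv else "revisit targeting"
print(f"CPA ${campaign_cpa:.2f} vs LTV ${ltv:.2f} -> {verdict}")

# CPRU using the article's example: 20% day-1 retention on 1,000 conversions
# leaves 200 retained users, i.e. $50 per retained user.
print(f"CPRU: ${cpru(spend, conversions, retention_rate=0.20):.2f}")
```
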
  • A federal court’s novel proposal to rein in Trump’s power grab

    Federal civil servants are supposed to enjoy robust protections against being fired or demoted for political reasons. But President Donald Trump has effectively stripped them of these protections by neutralizing the federal agencies that implement these safeguards.

    An agency known as the Merit Systems Protection Board (MSPB) hears civil servants’ claims that a “government employer discriminated against them, retaliated against them for whistleblowing, violated protections for veterans, or otherwise subjected them to an unlawful adverse employment action or prohibited personnel practice,” as a federal appeals court explained in an opinion on Tuesday. But the three-member board currently lacks the quorum it needs to operate because Trump fired two of the members.

    Trump also fired Hampton Dellinger, who until recently served as the special counsel of the United States, a role that investigates alleged violations of federal civil service protections and brings related cases to the MSPB. Trump recently nominated Paul Ingrassia, a far-right podcaster and recent law school graduate, to replace Dellinger.

    The upshot of these firings is that no one in the government is able to enforce laws and regulations protecting civil servants. As Dellinger noted in an interview, the morning before a federal appeals court determined that Trump could fire him, he’d “been able to get 6,000 newly hired federal employees back on the job,” and was working to get “all probationary employees put back on the job [after] their unlawful firing” by the Department of Government Efficiency and other Trump administration efforts to cull the federal workforce. These and other efforts to reinstate illegally fired federal workers are on hold, and may not resume until Trump leaves office.

    Which brings us to the US Court of Appeals for the Fourth Circuit’s decision in National Association of Immigration Judges v. Owen, which proposes an innovative solution to this problem.

    As the Owen opinion notes, the Supreme Court has held that the MSPB process is the only process a federal worker can use if they believe they’ve been fired in violation of federal civil service laws. So if that process is shut down, the worker is out of luck.

    But the Fourth Circuit’s Owen opinion argues that this “conclusion can only be true…when the statute functions as Congress intended.” That is, if the MSPB and the special counsel are unable to “fulfill their roles prescribed by” federal law, then the courts should pick up the slack and start hearing cases brought by illegally fired civil servants.

    For procedural reasons, the Fourth Circuit’s decision will not take effect right away — the court sent the case back down to a trial judge to “conduct a factual inquiry” into whether the MSPB continues to function. And even after that inquiry is complete, the Trump administration is likely to appeal the Fourth Circuit’s decision to the Supreme Court if it wants to keep civil service protections on ice.

    If the justices agree with the circuit court, however, that will close a legal loophole that has left federal civil servants unprotected by laws that are still very much on the books. And it will cure a problem that the Supreme Court bears much of the blame for creating.

    The “unitary executive,” or why the Supreme Court is to blame for the loss of civil service protections

    Federal law provides that Dellinger could “be removed by the President only for inefficiency, neglect of duty, or malfeasance in office,” and members of the MSPB enjoy similar protections against being fired. Trump’s decision to fire these officials was illegal under these laws.

    But a federal appeals court nonetheless permitted Trump to fire Dellinger, and the Supreme Court recently backed Trump’s decision to fire the MSPB members as well. The reason is a legal theory known as the “unitary executive,” which is popular among Republican legal scholars, and especially among the six Republicans who control the Supreme Court.

    If you want to know all the details of this theory, I can point you to three different explainers I’ve written on the unitary executive. The short explanation is that the unitary executive theory claims the president must have the power to fire top political appointees charged with executing federal laws – including officials who execute laws protecting civil servants from illegal firings.

    But the Supreme Court has never claimed that the unitary executive permits the president to fire any federal worker, regardless of whether Congress has protected them. In a seminal opinion laying out the unitary executive theory, for example, Justice Antonin Scalia argued that the president must have the power to remove “principal officers” — high-ranking officials like Dellinger who must be nominated by the president and confirmed by the Senate. Under Scalia’s approach, lower-ranking government workers may still be given some protection.

    The Fourth Circuit cannot override the Supreme Court’s decision to embrace the unitary executive theory. But the Owen opinion essentially tries to police the line drawn by Scalia. The Supreme Court has given Trump the power to fire some high-ranking officials, but he shouldn’t be able to use that power as a back door to eliminate job protections for all civil servants.

    The Fourth Circuit suggests that the federal law which simultaneously gave the MSPB exclusive authority over civil service disputes, while also protecting MSPB members from being fired for political reasons, must be read as a package. Congress, this argument goes, would not have agreed to shunt all civil service disputes to the MSPB if it had known that the Supreme Court would strip the MSPB of its independence. And so, if the MSPB loses its independence, it must also lose its exclusive authority over civil service disputes — and federal courts must regain the power to hear those cases.

    It remains to be seen whether this argument persuades a Republican Supreme Court — all three of the Fourth Circuit judges who decided the Owen case are Democrats, and two are Biden appointees. But the Fourth Circuit’s reasoning closely resembles the kind of inquiry that courts frequently engage in when a federal law is struck down.

    When a court declares a provision of federal law unconstitutional, it often needs to ask whether other parts of the law should fall along with the unconstitutional provision, an inquiry known as “severability.” Often, this severability analysis asks which hypothetical law Congress would have enacted if it had known that the one provision is invalid.

    The Fourth Circuit’s decision in Owen is essentially a severability opinion. It takes as a given the Supreme Court’s conclusion that laws protecting Dellinger and the MSPB members from being fired are unconstitutional, then asks which law Congress would have enacted if it had known that it could not protect MSPB members from political reprisal. The Fourth Circuit’s conclusion is that, if Congress had known that MSPB members cannot be politically independent, it would not have given them exclusive authority over civil service disputes.

    If the Supreme Court permits Trump to neutralize the MSPB, that would fundamentally change how the government functions

    The idea that civil servants should be hired based on merit and insulated from political pressure is hardly new. The first law protecting civil servants, the Pendleton Civil Service Reform Act, was signed into law by President Chester A. Arthur in 1883.

    Laws like the Pendleton Act do more than protect civil servants who, say, resist pressure to deny government services to the president’s enemies. They also make it possible for top government officials to actually do their jobs.

    Before the Pendleton Act, federal jobs were typically awarded as patronage — so when a Democratic administration took office, the Republicans who occupied most federal jobs would be fired and replaced by Democrats. This was obviously quite disruptive, and it made it difficult for the government to hire highly specialized workers. Why would someone go to the trouble of earning an economics degree and becoming an expert on federal monetary policy if they knew that their job in the Treasury Department would disappear the minute their party lost an election?

    Meanwhile, the task of filling all of these patronage jobs overwhelmed new presidents. As Candice Millard wrote in a 2011 biography of President James A. Garfield, the last president elected before the Pendleton Act, when Garfield took office, a line of job seekers began to form outside the White House “before he even sat down to breakfast.” By the time Garfield had eaten, this line “snaked down the front walk, out the gate, and onto Pennsylvania Avenue.” Garfield was assassinated by a disgruntled job seeker, a fact that likely helped build political support for the Pendleton Act.

    By neutralizing the MSPB, Trump is effectively undoing nearly 150 years’ worth of civil service reforms, and returning the federal government to a much more primitive state. At the very least, the Fourth Circuit’s decision in Owen is likely to force the Supreme Court to ask if it really wants a century and a half of work to unravel.
  • Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy


    Published: June 4, 2025

    Key Takeaways

    Researchers have caught Meta and Yandex secretly listening on localhost ports and using them to transfer sensitive data from Android devices.
    The corporations use Meta Pixel and Yandex Metrica scripts to transfer cookies from browsers to local apps. Using incognito mode or a VPN can’t fully protect users against it.
    A Meta spokesperson has called this a ‘miscommunication,’ which seems to be an attempt to underplay the situation.

    Wake up, Android folks! A new privacy scandal has hit your part of town. According to a new report led by researchers at Radboud University, Meta and Yandex have been listening on localhost ports to link your web browsing data with your identity and collect personal information without your consent.
    The companies use the Meta Pixel and Yandex Metrica scripts, which are embedded on 5.8 million and 3 million websites, respectively, to connect with their native apps on Android devices through localhost sockets.
    This creates a communication path between the cookies set in your browser and the locally installed apps, establishing a channel for transferring personal information from your device.
    Also, you are mistaken if you think using your browser’s incognito mode or a VPN can protect you. Zuckerberg’s latest method of data harvesting can’t be overcome by tweaking any privacy or cookie settings or by using a VPN or incognito mode.
    How Does It Work?
    Here’s the method used by Meta to spy on Android devices:

    As many as 22% of the top 1 million websites contain Meta Pixel – a tracking code that helps website owners measure ad performance and track user behaviour.
    When Meta Pixel loads, it creates a special cookie called _fbp, which is supposed to be a first-party cookie. This means no other third party, including Meta apps themselves, should have access to this cookie. The _fbp cookie identifies your browser whenever you visit a website, meaning it can identify which person is accessing which websites.
    However, Meta, being Meta, went and found a loophole around this. Now, whenever you run Facebook or Instagram on your Android device, they can open up listening ports, specifically a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580-12585), on your phone in the background. 
    Whenever you load a website in your browser, the Meta Pixel uses WebRTC with SDP munging, which essentially hides the _fbp cookie value inside the SDP message before it is transmitted to your phone’s localhost. 
    Since Facebook and Instagram are already listening on these ports, they receive the _fbp cookie value and can easily tie your identity to the website you’re visiting. Remember, Facebook and Instagram already have your identification details since you’re always logged in on these platforms.

    The report also says that Meta can link all the _fbp cookies received from various websites to your user ID. Simply put, Meta knows which person is viewing what set of websites.
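
    To make the localhost channel easier to picture, here is a deliberately simplified Python sketch of the receiving side: a process that listens on a local TCP port (12387, one of the ports named in the report) and logs whatever a page running on the same device sends to it. This is an illustration of the general pattern only, not Meta's actual implementation, and it omits the WebRTC/SDP-munging transport the researchers describe.

```python
# Simplified illustration of a native app listening on a localhost port.
# This is NOT Meta's code; it only demonstrates the general pattern the
# report describes: a local socket that a web page on the same device
# could reach, letting a logged-in app link the browsing session to a
# known account.
import socket

HOST = "127.0.0.1"   # localhost only - never exposed to the network
PORT = 12387         # one of the TCP ports named in the report

def listen_for_browser_data() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen()
        print(f"listening on {HOST}:{PORT}")
        while True:
            conn, _addr = server.accept()
            with conn:
                payload = conn.recv(4096)
                # In the scheme described by the researchers, the payload
                # would carry the _fbp cookie value smuggled out of the page.
                print("received from local browser:", payload.decode(errors="replace"))

if __name__ == "__main__":
    # A mobile app would run this in a background service/thread instead.
    listen_for_browser_data()
```
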
    Yandex also uses a similar method to harvest your personal data.

    Whenever you open a Yandex app, such as Yandex Maps, Yandex Browser, Yandex Search, or Navigator, it opens up ports like 29009, 30102, 29010, and 30103 on your phone. 
    When you visit a website that contains the Yandex Metrica Script, Yandex’s version of Meta Pixel, the script sends requests to Yandex servers containing obfuscated parameters. 
    These parameters are then sent to localhost over HTTP and HTTPS, addressed either to the IP address 127.0.0.1 directly or to the yandexmetrica.com domain, which quietly resolves to 127.0.0.1.
    Now, the Yandex Metrica SDK inside the Yandex apps receives these parameters and sends back device identifiers, such as the Android Advertising ID, UUIDs, or device fingerprints. This entire message is encrypted to hide what it contains.
    The Yandex Metrica Script receives this info and sends it back to the Yandex servers. Just like Meta, Yandex can also tie your website activity to the device information shared by the SDK.
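
    Below is a similarly hedged sketch of the HTTP variant: a tiny handler bound to 127.0.0.1 on one of the ports listed above that answers a local web request with device identifiers. The endpoint path, response format, and identifier values are invented for illustration; the real Metrica SDK encrypts its payload and uses its own protocol.

```python
# Toy illustration of an SDK-style HTTP listener on localhost.
# NOT the Yandex Metrica SDK - the path, payload, and response format are
# invented to show how a script on a web page could fetch device
# identifiers from an app on the same phone.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

DEVICE_INFO = {
    # Placeholder values; a real SDK would return the Android Advertising ID,
    # UUIDs, or a device fingerprint, and would encrypt the response.
    "advertising_id": "00000000-0000-0000-0000-000000000000",
    "device_uuid": "example-uuid",
}

class LocalTrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        # A tracking script would request something like
        # http://127.0.0.1:30102/device-info (hypothetical path).
        if self.path == "/device-info":
            body = json.dumps(DEVICE_INFO).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            # CORS header so a script running in the browser can read the reply.
            self.send_header("Access-Control-Allow-Origin", "*")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Bind to localhost only, on one of the ports mentioned in the report.
    HTTPServer(("127.0.0.1", 30102), LocalTrackingHandler).serve_forever()
```
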

    Meta’s Infamous History with Privacy Norms
    This is nothing new or unthinkable for Meta. The Mark Zuckerberg-led social media giant has a history of such privacy violations. 
    For instance, in 2024, the company was accused of collecting biometric data from Texas users without their express consent. The company settled the lawsuit by paying $1.4B. 
    One of the most famous cases was the Cambridge Analytica scandal, which broke in 2018, when a political consulting firm accessed the private data of 87 million Facebook users without consent. The FTC fined Meta $5B for privacy violations, alongside a $100M settlement with the US Securities and Exchange Commission. 
    Meta Pixel has also come under scrutiny before, when it was accused of collecting sensitive health information from hospital websites. In another case dating back to 2012, Meta was accused of tracking users even after they logged out from their Facebook accounts. In that case, Meta paid $90M and promised to delete the collected data. 
    In 2024, South Korea also fined Meta around $15M for inappropriately collecting personal data, such as sexual orientation and political beliefs, of 980K users.
    In September 2024, Meta was fined €91M (about $101M) by the Irish Data Protection Commission for inadvertently storing user passwords in plain text in such a way that employees could search for them. The passwords were not encrypted and were essentially leaked internally.
    So, the latest scandal isn’t entirely out of character for Meta. It has been finding ways to collect your data ever since its incorporation, and it seems like it will continue to do so, regardless of the regulations and safeguards in place.
    That said, Meta’s recent tracking method is insanely dangerous because there’s no safeguard around it. Even if you visit websites in incognito mode or use a VPN, Meta Pixel can still track your activities. 
    The past lawsuits also show a very identifiable pattern: Meta doesn’t fight a lawsuit until the end to try to win it. It either accepts the fine or settles the lawsuit with monetary compensation. This essentially goes to show that it passively accepts and even ‘owns’ the illegitimate tracking methods it has been using for decades. It’s quite possible that the top management views these fines and penalties as a cost of collecting data.
    Meta’s Timid Response
    Meta’s response claims that there’s some ‘miscommunication’ regarding Google policies. However, the method used in the aforementioned tracking scandal isn’t something that can simply happen due to ‘faulty design’ or miscommunication. 

    We are in discussions with Google to address a potential miscommunication regarding the application of their policies – Meta Spokesperson

    This kind of unethical tracking method has to be deliberately designed by engineers for it to work at such a large scale. While Meta is still trying to underplay the situation, it has paused the ‘feature’ for now. The report also notes that, as of June 3, Facebook and Instagram are no longer actively listening on the new ports.
    Here’s what will possibly happen next:

    A lawsuit may be filed based on the report.
    An investigating committee might be formed to question the matter.
    The company will come up with lame excuses, such as misinterpretation or miscommunication of policy guidelines.
    Meta will eventually settle the lawsuit or bear the fine with pride, like it has always done. 

    The regulatory authorities are apparently chasing a rat that finds new holes to hide every day. Companies like Meta and Yandex seem to be one step ahead of these regulations and have mastered the art of finding loopholes.
    More than legislative technicalities, it’s the moral character of these companies that incidents like this make clear. The intent of these regulations is to protect personal information, and the fact that Meta and Yandex blatantly circumvent their spirit shows just how far these corporations will go in pursuit of profit.

    #meta #yandex #spying #android #users
    Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy
    Home Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy News Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy 7 min read Published: June 4, 2025 Key Takeaways Meta and Yandex have been found guilty of secretly listening to localhost ports and using them to transfer sensitive data from Android devices. The corporations use Meta Pixel and Yandex Metrica scripts to transfer cookies from browsers to local apps. Using incognito mode or a VPN can’t fully protect users against it. A Meta spokesperson has called this a ‘miscommunication,’ which seems to be an attempt to underplay the situation. Wake up, Android folks! A new privacy scandal has hit your area of town. According to a new report led by Radboud University, Meta and Yandex have been listening to localhost ports to link your web browsing data with your identity and collect personal information without your consent. The companies use Meta Pixel and the Yandex Metrica scripts, which are embedded on 5.8 million and 3 million websites, respectively, to connect with their native apps on Android devices through localhost sockets. This creates a communication path between the cookies on your website and the local apps, establishing a channel for transferring personal information from your device. Also, you are mistaken if you think using your browser’s incognito mode or a VPN can protect you. Zuckerberg’s latest method of data harvesting can’t be overcome by tweaking any privacy or cookie settings or by using a VPN or incognito mode. How Does It Work? Here’s the method used by Meta to spy on Android devices: As many as 22% of the top 1 million websites contain Meta Pixel – a tracking code that helps website owners measure ad performance and track user behaviour. When Meta Pixel loads, it creates a special cookie called _fbp, which is supposed to be a first-party cookie. This means no other third party, including Meta apps themselves, should have access to this cookie. The _fbp cookie identifies your browser whenever you visit a website, meaning it can identify which person is accessing which websites. However, Meta, being Meta, went and found a loophole around this. Now, whenever you run Facebook or Instagram on your Android device, they can open up listening ports, specifically a TCP portand a UDP port, on your phone in the background.  Whenever you load a website on your browser, the Meta Pixel uses WebRTC with SDP Munging, which essentially hides the _fbp cookie value inside the SDP message before being transmitted to your phone’s localhost.  Since Facebook and Instagram are already listening to this port, it receives the _fbp cookie value and can easily tie your identity to the website you’re visiting. Remember, Facebook and Instagram already have your identification details since you’re always logged in on these platforms. The report also says that Meta can link all _fbp received from various websites to your ID. Simply put, Meta knows which person is viewing what set of websites. Yandex also uses a similar method to harvest your personal data. Whenever you open a Yandex app, such as Yandex Maps, Yandex Browser, Yandex Search, or Navigator, it opens up ports like 29009, 30102, 29010, and 30103 on your phone.  When you visit a website that contains the Yandex Metrica Script, Yandex’s version of Meta Pixel, the script sends requests to Yandex servers containing obfuscated parameters.  
These parameters are then sent to the local host via HTTP and HTTPS, which contains the IP address 127.0.0.1, or the yandexmetrica.com domain, which secretly points to 127.0.0.1. Now, the Yandex Metrica SDK in the Yandex apps receives these parameters and sends device identifiers, such as an Android Advertising ID, UUIDs, or device fingerprints. This entire message is encrypted to hide what it contains. The Yandex Metrica Script receives this info and sends it back to the Yandex servers. Just like Meta, Yandex can also tie your website activity to the device information shared by the SDK. Meta’s Infamous History with Privacy Norms This is not something new or unthinkable that Meta has done. The Mark Zuckerberg-led social media giant has a history of such privacy violations.  For instance, in 2024, the company was accused of collecting biometric data from Texas users without their express consent. The company settled the lawsuit by paying B.  Another of the most famous lawsuits was the Cambridge Analytica scandal in 2018, where a political consulting firm accessed private data of 87 million Facebook users without consent. The FTC fined Meta B for privacy violations along with a 100M settlement with the US Securities and Exchange Commission.  Meta Pixel has also come under scrutiny before, when it was accused of collecting sensitive health information from hospital websites. In another case dating back to 2012, Meta was accused of tracking users even after they logged out from their Facebook accounts. In this case, Meta paid M and promised to delete the collected data.  In 2024, South Korea also fined Meta M for inappropriately collecting personal data, such as sexual orientation and political beliefs, of 980K users. In September 2024, Meta was fined M by the Irish Data Protection Commission for inadvertently storing user passwords in plain text in such a way that employees could search for them. The passwords were not encrypted and were essentially leaked internally. So, the latest scandal isn’t entirely out of character for Meta. It has been finding ways to collect your data ever since its incorporation, and it seems like it will continue to do so, regardless of the regulations and safeguards in place. That said, Meta’s recent tracking method is insanely dangerous because there’s no safeguard around it. Even if you visit websites in incognito mode or use a VPN, Meta Pixel can still track your activities.  The past lawsuits also show a very identifiable pattern: Meta doesn’t fight a lawsuit until the end to try to win it. It either accepts the fine or settles the lawsuit with monetary compensation. This essentially goes to show that it passively accepts and even ‘owns’ the illegitimate tracking methods it has been using for decades. It’s quite possible that the top management views these fines and penalties as a cost of collecting data. Meta’s Timid Response Meta’s response claims that there’s some ‘miscommunication’ regarding Google policies. However, the method used in the aforementioned tracking scandal isn’t something that can simply happen due to ‘faulty design’ or miscommunication.  We are in discussions with Google to address a potential miscommunication regarding the application of their policies – Meta Spokesperson This kind of unethical tracking method has to be deliberately designed by engineers for it to work perfectly on such a large scale. While Meta is still trying to underplay the situation, it has paused the ‘feature’as of now. 
    The report also notes that, as of June 3, Facebook and Instagram are no longer actively listening on the ports in question. Here’s what will likely happen next: a lawsuit may be filed based on the report; an investigating committee might be formed to look into the matter; the company will come up with lame excuses, such as misinterpretation or miscommunication of policy guidelines; and Meta will eventually settle the lawsuit or bear the fine with pride, like it has always done.

    Regulators are effectively chasing a rat that finds a new hole to hide in every day. Companies like Meta and Yandex seem to be one step ahead of these regulations and have mastered the art of finding loopholes. More than any legislative technicality, it’s the ethics of these companies that incidents like this lay bare. These regulations exist to protect personal information, and the fact that Meta and Yandex blatantly circumvent their spirit shows the absolutely horrific state of capitalism these corporations operate in.

    Krishi Chowdhary, The Tech Report
  • How farmers can help rescue water-loving birds

    James Gentz has seen birds aplenty on his East Texas rice-and-crawfish farm: snow geese and pintails, spoonbills and teal. The whooping crane couple, though, he found “magnificent.” These endangered, long-necked behemoths arrived in 2021 and set to building a nest amid his flooded fields. “I just loved to see them,” Gentz says.

    Not every farmer is thrilled to host birds. Some worry about the spread of avian flu, others are concerned that the birds will eat too much of their valuable crops. But as an unstable climate delivers too little water, careening temperatures and chaotic storms, the fates of human food production and birds are ever more linked—with the same climate anomalies that harm birds hurting agriculture too.
    In some places, farmer cooperation is critical to the continued existence of whooping cranes and other wetland-dependent waterbird species, close to one-third of which are experiencing declines. Numbers of waterfowl (think ducks and geese) have crashed by 20 percent since 2014, and long-legged wading shorebirds like sandpipers have suffered steep population losses. Conservation-minded biologists, nonprofits, government agencies, and farmers themselves are amping up efforts to ensure that each species survives and thrives. With federal support in the crosshairs of the Trump administration, their work is more important (and threatened) than ever.
    Their collaborations, be they domestic or international, are highly specific, because different regions support different kinds of agriculture—grasslands, or deep or shallow wetlands, for example, favored by different kinds of birds. Key to the efforts is making it financially worthwhile for farmers to keep—or tweak—practices to meet bird forage and habitat needs.
    Traditional crawfish-and-rice farms in Louisiana, as well as in Gentz’s corner of Texas, mimic natural freshwater wetlands that are being lost to saltwater intrusion from sea level rise. Rice grows in fields that are flooded to keep weeds down; fields are drained for harvest by fall. They are then re-flooded to cover crawfish burrowed in the mud; these are harvested in early spring—and the cycle begins again.
    That second flooding coincides with fall migration—a genetic and learned behavior that determines where birds fly and when—and it lures massive numbers of egrets, herons, bitterns, and storks that dine on the crustaceans as well as on tadpoles, fish, and insects in the water.
    On a biodiverse crawfish-and-rice farm, “you can see 30, 40, 50 species of birds, amphibians, reptiles, everything,” says Elijah Wojohn, a shorebird conservation biologist at nonprofit Manomet Conservation Sciences in Massachusetts. In contrast, if farmers switch to less water-intensive corn and soybean production in response to climate pressures, “you’ll see raccoons, deer, crows, that’s about it.” Wojohn often relies on word-of-mouth to hook farmers on conservation; one learned to spot whimbrel, with their large, curved bills, got “fired up” about them and told all his farmer friends. Such farmer-to-farmer dialogue is how you change things among this sometimes change-averse group, Wojohn says.
    In the Mississippi Delta and in California, where rice is generally grown without crustaceans, conservation organizations like Ducks Unlimited have long boosted farmers’ income and staying power by helping them get paid to flood fields in winter for hunters. This attracts overwintering ducks and geese—considered an extra “crop”—that gobble leftover rice and pond plants; the birds also help to decompose rice stalks so farmers don’t have to remove them. Ducks Unlimited’s goal is simple, says director of conservation innovation Scott Manley: Keep rice farmers farming rice. This is especially important as a changing climate makes that harder. 2024 saw a huge push, with the organization conserving 1 million acres for waterfowl.
    Some strategies can backfire. In Central New York, where dwindling winter ice has seen waterfowl lingering past their habitual migration times, wildlife managers and land trusts are buying less productive farmland to plant with native grasses; these give migratory fuel to ducks when not much else is growing. But there’s potential for this to produce too many birds for the land available back in their breeding areas, says Andrew Dixon, director of science and conservation at the Mohamed Bin Zayed Raptor Conservation Fund in Abu Dhabi, and coauthor of an article about the genetics of bird migration in the 2024 Annual Review of Animal Biosciences. This can damage ecosystems meant to serve them.

    Recently, conservation efforts spanning continents and thousands of miles have sprung up. One seeks to protect buff-breasted sandpipers. As they migrate 18,000 miles to and from the High Arctic where they nest, the birds experience extreme hunger—hyperphagia—that compels them to voraciously devour insects in short grasses where the bugs proliferate. But many stops along the birds’ round-trip route are threatened. There are water shortages affecting agriculture in Texas, where the birds forage at turf grass farms; grassland loss and degradation in Paraguay; and in Colombia, conversion of forage lands to exotic grasses and rice paddies these birds cannot use.
    Conservationists say it’s critical to protect habitat for “buffies” all along their route, and to ensure that the winters these small shorebirds spend around Uruguay’s coastal lagoons are a food fiesta. To that end, Manomet conservation specialist Joaquín Aldabe, in partnership with Uruguay’s agriculture ministry, has so far taught 40 local ranchers how to improve their cattle grazing practices. Rotationally moving the animals from pasture to pasture means grasses stay the right length for insects to flourish.
    There are no easy fixes in the North American northwest, where bird conservation is in crisis. Extreme drought is causing breeding grounds, molting spots, and migration stopover sites to vanish. It is also endangering the livelihoods of farmers, who feel the push to sell land to developers. From Southern Oregon to Central California, conservation allies have provided monetary incentives for water-strapped grain farmers to leave behind harvest debris to improve survivability for the 1 billion birds that pass through every year, and for ranchers to flood-irrigate unused pastures.
    One treacherous leg of the northwest migration route is the parched Klamath Basin of Oregon and California. For three recent years, “we saw no migrating birds. I mean, the peak count was zero,” says John Vradenburg, supervisory biologist of the Klamath Basin National Wildlife Refuge Complex. He and myriad private, public, and Indigenous partners are working to conjure more water for the basin’s human and avian denizens, as perennial wetlands become seasonal wetlands, seasonal wetlands transition to temporary wetlands, and temporary wetlands turn to arid lands.
    Taking down four power dams and one levee has stretched the Klamath River’s water across the landscape, creating new streams and connecting farm fields to long-separated wetlands. But making the most of this requires expansive thinking. Wetland restoration—now endangered by loss of funding from the current administration—would help drought-afflicted farmers by keeping water tables high. But what if farmers could also receive extra money for their businesses via eco-credits, akin to carbon credits, for the work those wetlands do to filter-clean farm runoff? And what if wetlands could function as aquaculture incubators for juvenile fish, before stocking rivers? Klamath tribes are invested in restoring endangered c’waam and koptu sucker fish, and this could help them achieve that goal.
    As birds’ traditional resting and nesting spots become inhospitable, a more sobering question is whether improvements can happen rapidly enough. The blistering pace of climate change gives little chance for species to genetically adapt, although some are changing their behaviors. That means that the work of conservationists to find and secure adequate, supportive farmland and rangeland as the birds seek out new routes has become a sprint against time.
    This story originally appeared at Knowable Magazine.

    Lela Nargi, Knowable Magazine

    Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.

  • Congress Passed a Sweeping Free-Speech Crackdown—and No One’s Talking About It

    The TAKE IT DOWN Act passed with bipartisan support and glowing coverage. Experts warn that it threatens the very users it claims to protect.

    By Nitish Pahwa


    May 22, 2025, 2:03 PM

    Donald and Melania Trump during the signing of the TAKE IT DOWN Act at the White House on Monday.
    Jim Watson/AFP via Getty Images

    Had you scanned any of the latest headlines around the TAKE IT DOWN Act, legislation that President Donald Trump signed into law Monday, you would have come away with a deeply mistaken impression of the bill and its true purpose.
    The surface-level pitch is that this is a necessary law for addressing nonconsensual intimate images—known more widely as revenge porn. Obfuscating its intent with a classic congressional acronym (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks), the TAKE IT DOWN Act purports to help scrub the internet of exploitative, nonconsensual sexual media, whether real or digitally mocked up, at a time when artificial intelligence tools and automated image generators have supercharged its spread. Enforcement is delegated to the Federal Trade Commission, which will give online communities that specialize primarily in user-generated content (e.g., social media, message boards) a heads-up and a 48-hour takedown deadline whenever an appropriate example is reported. These platforms have also been directed to set up on-site reporting systems by May 2026. Penalties for violations include prison sentences of two to three years and steep monetary fines.

    Public reception has been rapturous. CNN is gushing that “victims of explicit deepfakes will now be able to take legal action against people who create them.” A few local Fox affiliates are taking the government at its word that TAKE IT DOWN is designed to target revenge porn. Other outlets, like the BBC and USA Today, led off by noting first lady Melania Trump’s appearance at the bill signing.
    Yet these headlines and pieces ignore TAKE IT DOWN’s serious potential for abuse. (Jezebel and Wired were perhaps the only publications to point out in both a headline and subhead that the law merely “claims to offer victims greater protections” and that “free speech advocates warn it could be weaponized to fuel censorship.”) Rarer still, with the exception of sites like the Verge, has there been any acknowledgment of Trump’s own stated motivation for passing the act, as he’d underlined in a joint address to Congress in March: “I’m going to use that bill for myself too, if you don’t mind, because nobody gets treated worse than I do online, nobody.”
    Sure, it’s typical for this president to make such serious matters about himself. But Trump’s blathering about having it “worse” than revenge-porn survivors, and his quip about “using that bill for myself,” are not flukes. For a while now, activists who specialize in free speech, digital privacy, and even stopping child sexual abuse have attempted to warn that the bill will not do what it purports to do.
    Late last month, after TAKE IT DOWN had passed both the House and Senate, the Electronic Frontier Foundation wrote that the bill’s legislative mechanism “lacks critical safeguards against frivolous or bad-faith takedown requests.” For one, the 48-hour takedown deadline means that digital platforms (especially smaller, less-resourced websites) will be forced to use automated filters that often flag legal content—because there won’t be “enough time to verify whether the speech is actually illegal.” The EFF also warns that TAKE IT DOWN requires monitoring that could reach into even encrypted messages between users. If this legislation has the effect of granting law enforcement a means of bypassing encrypted communications, we may as well bid farewell to the very concept of digital privacy.
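    To see the EFF’s point about the deadline, it can help to think of it as a decision rule. The sketch below is a toy model with made-up numbers, not anything drawn from the bill or the EFF’s analysis: if careful review of a report can take longer than 48 hours, the only policy that never blows the deadline is to remove first and review later, which sweeps up lawful posts along the way.

    # Toy model of the 48-hour takedown incentive. All numbers are invented for
    # illustration; this is not how any real platform's moderation pipeline works.
    DEADLINE_HOURS = 48

    def handle_report(review_hours_needed: float, is_actually_unlawful: bool) -> str:
        if review_hours_needed <= DEADLINE_HOURS:
            # There is time for a human to verify the report before the deadline.
            return "remove" if is_actually_unlawful else "keep"
        # No time to verify: the only "compliant" move is automatic removal,
        # even if the post would turn out to be lawful.
        return "remove"

    # (hours a careful review would need, whether the content is actually unlawful)
    reports = [(6, True), (72, False), (90, False), (12, False)]
    print([handle_report(h, unlawful) for h, unlawful in reports])
    # -> ['remove', 'remove', 'remove', 'keep']: two lawful posts taken down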
    A February letter addressed to the Senate from a wide range of free-expression nonprofits—including Fight for the Future and the Authors Guild—also raised concerns over TAKE IT DOWN’s implications for content moderation and encryption. The groups noted that although the bill makes allowances for legal porn and newsworthy content, “those exceptions are not included in the bill’s takedown system.” They added that private tools like direct messages and cloud storage aren’t protected either, which could leave them open to invasive monitoring with little justification. The Center for Democracy and Technology, a signatory to the letter, later noted in a follow-up statement that the powers granted to the FTC in enforcing such a vague law could lead to politically motivated attacks, undermining progress in tackling actual nonconsensual imagery.
    Techdirt’s Mike Masnick wrote last month that TAKE IT DOWN is “so badly designed that the people it’s meant to help oppose it,” pointing to public statements from the advocacy group Cyber Civil Rights Initiative, “whose entire existence is based on representing the interests of victims” of nonconsensual intimate imagery. CCRI has long criticized the bill’s takedown provisions and ultimately concluded that the nonprofit “cannot support legislation that risks endangering the very communities it is dedicated to protecting, including LGBTQIA+ individuals, people of color, and other vulnerable groups.” (In a separate statement, the CCRI highlighted other oddities within the bill, like a loophole allowing for nonconsensual sexual media to be posted if the uploader happens to appear in the image, and the explicit inclusion of forums that specialize in “audio files,” despite otherwise focusing on visual materials.) “The concerns are not theoretical,” Masnick continued. “The bill’s vague standards combined with harsh criminal penalties create a perfect storm for censorship and abuse.”


    Let’s be clear: No one here is at all opposed to sound legislation that tackles the inescapable, undeniable problem of nonconsensual sexual material. All 50 states, along with the District of Columbia, have enacted laws criminalizing exploitative sexual photos and videos to varying degrees. TAKE IT DOWN extends such coverage to deepfake revenge porn, a change that makes the bill a necessary complement to these state laws—but its text is shockingly narrow on the digital front, criminalizing only A.I. imagery that’s deemed to be “indistinguishable from an authentic visual depiction.” This just leads to more vague language that hardly addresses the underlying issue.
    The CCRI has spent a full decade fighting for laws to address the crisis of nonconsensual sexual imagery, even drafting model legislation—parts of which did make it into TAKE IT DOWN. On Bluesky, CCRI President Mary Anne Franks called this fact “bittersweet,” proclaiming that the long-overdue criminalization of exploitative sexual imagery is undermined by the final law’s “lack of adequate safeguards against false reports.” A few House Democrats looked to the group’s proposed fixes and attempted to pass amendments that would have added such safeguards, only to be obstructed by their Republican colleagues.
    This should worry everyone. These groups made concerted efforts to inform Congress of the issues with TAKE IT DOWN and to propose solutions, only to be all but ignored. As Masnick wrote in another Techdirt post, the United States already has enough of a problem with the infamous Digital Millennium Copyright Act, the only other American law with a notice-and-takedown measure like TAKE IT DOWN’s, albeit designed to prevent the unauthorized spread of copyright works. Just ask any creatives or platform operators who’ve had to deal with abusive flurries of bad-faith DMCA takedown requests—even though the law includes a clause meant to protect against such weaponization. There’s no reason to believe that TAKE IT DOWN won’t be similarly exploited to go after sex workers and LGBTQ+ users, as well as anyone who posts an image or animation that another user simply doesn’t like and decides to report. It’s not dissimilar to other pieces of proposed legislation, like the Kids Online Safety Act, that purport to protect young netizens via wishy-washy terms that could criminalize all sorts of free expression.


    Here’s a hypothetical: A satirical cartoonist comes up with an illustration of Trump as a baby and publishes it on a niche social media platform that they use to showcase their art. A Trump supporter finds this cartoon and decides to report it as abusive pornography, leading to a takedown notice on the cartoonist’s website. The artist and the platform do not comply, and a pissed-off Trump brings the full force of the law against this creator. The process of discovery leads prosecutors to break into the artist’s encrypted communications, revealing drafts of the drawing that the cartoonist had shared with friends. All of this gets the illustrator punished with a brief prison sentence and steep fine, fully sabotaging their bank account and career; the social media platform they used is left bankrupt and shutters. The artists are forced to migrate to another site, whose administrators see what happened to their former home and decide to censor political works. All the while, an underage user finds that their likeness has been used to generate a sexually explicit deepfake that has been spread all over Discord—yet their case is found to have no merit because the deepfake in question is not considered “indistinguishable from an authentic visual depiction,” despite all the Discord-based abusers recognizing exactly whom that deepfake is meant to represent.
    It’s a hypothetical—but not an unimaginable one. It’s a danger that too few Americans understand, thanks to congressional ignorance and the media’s credulous reporting on TAKE IT DOWN. The result is a law that’s supposedly meant to protect the vulnerable but ends up shielding the powerful—and punishing the very people it promised to help.

    Get the best of news and politics
    Sign up for Slate's evening newsletter.
    #congress #passed #sweeping #freespeech #crackdownand
    Congress Passed a Sweeping Free-Speech Crackdown—and No One’s Talking About It
    Users Congress Passed a Sweeping Free-Speech Crackdown—and No One’s Talking About It The TAKE IT DOWN Act passed with bipartisan support and glowing coverage. Experts warn that it threatens the very users it claims to protect. By Nitish Pahwa Enter your email to receive alerts for this author. Sign in or create an account to better manage your email preferences. May 22, 20252:03 PM Donald and Melania Trump during the signing of the TAKE IT DOWN Act at the White House on Monday. Jim Watson/AFP via Getty Images Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Had you scanned any of the latest headlines around the TAKE IT DOWN Act, legislation that President Donald Trump signed into law Monday, you would have come away with a deeply mistaken impression of the bill and its true purpose. The surface-level pitch is that this is a necessary law for addressing nonconsensual intimate images—known more widely as revenge porn. Obfuscating its intent with a classic congressional acronym, the TAKE IT DOWN Act purports to help scrub the internet of exploitative, nonconsensual sexual media, whether real or digitally mocked up, at a time when artificial intelligence tools and automated image generators have supercharged its spread. Enforcement is delegated to the Federal Trade Commission, which will give online communities that specialize primarily in user-generated contenta heads-up and a 48-hour takedown deadline whenever an appropriate example is reported. These platforms have also been directed to set up on-site reporting systems by May 2026. Penalties for violations include prison sentences of two to three years and steep monetary fines. Public reception has been rapturous. CNN is gushing that “victims of explicit deepfakes will now be able to take legal action against people who create them.” A few local Fox affiliates are taking the government at its word that TAKE IT DOWN is designed to target revenge porn. Other outlets, like the BBC and USA Today, led off by noting first lady Melania Trump’s appearance at the bill signing. Yet these headlines and pieces ignore TAKE IT DOWN’s serious potential for abuse.Rarer still, with the exception of sites like the Verge, has there been any acknowledgment of Trump’s own stated motivation for passing the act, as he’d underlined in a joint address to Congress in March: “I’m going to use that bill for myself too, if you don’t mind, because nobody gets treated worse than I do online, nobody.” Sure, it’s typical for this president to make such serious matters about himself. But Trump’s blathering about having it “worse” than revenge-porn survivors, and his quip about “using that bill for myself,” is not a fluke. For a while now, activists who specialize in free speech, digital privacy, and even stopping child sexual abuse have attempted to warn that the bill will not do what it purports to do. Late last month, after TAKE IT DOWN had passed both the House and Senate, the Electronic Frontier Foundation wrote that the bill’s legislative mechanism “lacks critical safeguards against frivolous or bad-faith takedown requests.” For one, the 48-hour takedown deadline means that digital platformswill be forced to use automated filters that often flag legal content—because there won’t be “enough time to verify whether the speech is actually illegal.” The EFF also warns that TAKE IT DOWN requires monitoring that could reach into even encrypted messages between users. 
If this legislation has the effect of granting law enforcement a means of bypassing encrypted communications, we may as well bid farewell to the very concept of digital privacy. A February letter addressed to the Senate from a wide range of free-expression nonprofits—including Fight for the Future and the Authors Guild—also raised concerns over TAKE IT DOWN’s implications for content moderation and encryption. The groups noted that although the bill makes allowances for legal porn and newsworthy content, “those exceptions are not included in the bill’s takedown system.” They added that private tools like direct messages and cloud storage aren’t protected either, which could leave them open to invasive monitoring with little justification. The Center for Democracy and Technology, a signatory to the letter, later noted in a follow-up statement that the powers granted to the FTC in enforcing such a vague law could lead to politically motivated attacks, undermining progress in tackling actual nonconsensual imagery. Techdirt’s Mike Masnick wrote last month that TAKE IT DOWN is “so badly designed that the people it’s meant to help oppose it,” pointing to public statements from the advocacy group Cyber Civil Rights Initiative, “whose entire existence is based on representing the interests of victims” of nonconsensual intimate imagery. CCRI has long criticized the bill’s takedown provisions and ultimately concluded that the nonprofit “cannot support legislation that risks endangering the very communities it is dedicated to protecting, including LGBTQIA+ individuals, people of color, and other vulnerable groups.”“The concerns are not theoretical,” Masnick continued. “The bill’s vague standards combined with harsh criminal penalties create a perfect storm for censorship and abuse.” Related From Slate Let’s be clear: No one here is at all opposed to sound legislation that tackles the inescapable, undeniable problem of nonconsensual sexual material. All 50 states, along with the District of Columbia, have enacted laws criminalizing exploitative sexual photos and videos to varying degrees. TAKE IT DOWN extends such coverage to deepfake revenge porn, a change that makes the bill a necessary complement to these state laws—but its text is shockingly narrow on the digital front, criminalizing only A.I. imagery that’s deemed to be “indistinguishable from an authentic visual depiction.” This just leads to more vague language that hardly addresses the underlying issue. The CCRI has spent a full decade fighting for laws to address the crisis of nonconsensual sexual imagery, even drafting model legislation—parts of which did make it into TAKE IT DOWN. On Bluesky, CCRI President Mary Anne Franks called this fact “bittersweet,” proclaiming that the long-overdue criminalization of exploitative sexual imagery is undermined by the final law’s “lack of adequate safeguards against false reports.” A few House Democrats looked to the group’s proposed fixes and attempted to pass amendments that would have added such safeguards, only to be obstructed by their Republican colleagues. This should worry everyone. These groups made concerted efforts to inform Congress of the issues with TAKE IT DOWN and to propose solutions, only to be all but ignored. 
As Masnick wrote in another Techdirt post, the United States already has enough of a problem with the infamous Digital Millennium Copyright Act, the only other American law with a notice-and-takedown measure like TAKE IT DOWN’s, albeit designed to prevent the unauthorized spread of copyright works. Just ask any creatives or platform operators who’ve had to deal with abusive flurries of bad-faith DMCA takedown requests—even though the law includes a clause meant to protect against such weaponization. There’s no reason to believe that TAKE IT DOWN won’t be similarly exploited to go after sex workers and LGBTQ+ users, as well as anyone who posts an image or animation that another user simply doesn’t like and decides to report. It’s not dissimilar to other pieces of proposed legislation, like the Kids Online Safety Act, that purport to protect young netizens via wishy-washy terms that could criminalize all sorts of free expression. Popular in Technology Here’s a hypothetical: A satirical cartoonist comes up with an illustration of Trump as a baby and publishes it on a niche social media platform that they use to showcase their art. A Trump supporter finds this cartoon and decides to report it as abusive pornography, leading to a takedown notice on the cartoonist’s website. The artist and the platform do not comply, and a pissed-off Trump brings the full force of the law against this creator. The process of discovery leads prosecutors to break into the artist’s encrypted communications, revealing drafts of the drawing that the cartoonist had shared with friends. All of this gets the illustrator punished with a brief prison sentence and steep fine, fully sabotaging their bank account and career; the social media platform they used is left bankrupt and shutters. The artists are forced to migrate to another site, whose administrators see what happened to their former home and decide to censor political works. All the while, an underage user finds that their likeness has been used to generate a sexually explicit deepfake that has been spread all over Discord—yet their case is found to have no merit because the deepfake in question is not considered “indistinguishable from an authentic visual depiction,” despite all the Discord-based abusers recognizing exactly whom that deepfake is meant to represent. It’s a hypothetical—but not an unimaginable one. It’s a danger that too few Americans understand, thanks to congressional ignorance and the media’s credulous reporting on TAKE IT DOWN. The result is a law that’s supposedly meant to protect the vulnerable but ends up shielding the powerful—and punishing the very people it promised to help. Get the best of news and politics Sign up for Slate's evening newsletter. #congress #passed #sweeping #freespeech #crackdownand
  • The Ordinary and Uncommon lift the lid on beauty endorsements

    On the surface, the towering pile of fake banknotes stacked in the window of a glistening skincare store could be interpreted as a marketing stunt, but behind the glass, the message is serious. The Cost of Influence – a new physical installation created by Uncommon for The Ordinary – was designed to highlight the hidden fees consumers pay for celebrity endorsements in the beauty industry, not tucked away in the ingredients list but embedded in the price.
    The campaign, live during the brand's flagship relaunch, makes The Ordinary's positioning crystal clear: no inflated costs, no gimmicks, just science-backed skincare. "What the industry is trying to keep under wraps is the complete opposite of transparency," says Joe Sare, art director at Uncommon. "Brands are paying significant sums of money for celebrities to endorse their products and passing those costs onto the consumer. So, we decided to reveal this 'secret ingredient' to the world, to reinforce the brand's commitment to being truly open."
    Far from a traditional ad campaign, the team leaned into something more visceral that people could touch, feel, and share. At the heart of the installation is a cold, almost sterile stack of imitation cash, deliberately stripped of the polish we might associate with retail window dressing.
    "A huge pile of money feels quite cold, and at first glance, it could almost look like a pile of rubbish," Joe explains. "Then, your eyes are drawn to the words on the window that explain what you're seeing, and the penny drops – no pun intended."

    The display's impact lies in its tension: blending stark visual simplicity with an idea that demands a second thought. Uncommon consciously leaned into that duality.
    "While the installation was physical, we went into the process with social virality in mind," says Joe. "We did play around with other, more intricate ways of showing the money – it flying around in a box, shapes other than the pile, making it interactive – but we landed on this execution as the most honest and impactful way of telling the story."
    On the glass, bold copy spells out that one of the most expensive ingredients in many beauty products is influence. Alongside the pile are tongue-in-cheek "price tags" assigning monetary values to fictional endorsements—the going rate, perhaps, for a 'celebrity serum' or moisturiser marketed by your favourite A-lister.
    Rather than preaching or pointing fingers, the tone is playful and inclusive. "We're ultimately on the audiences' side," says Marco Del Valle, Planning Director at Uncommon. "This isn't about judging them for buying other brands, but revealing something to them that they likely were not aware of."

    For a brand like The Ordinary – whose identity is rooted in radical transparency – the message isn't an opportunistic call-out but a reflection of its founding values. "The Ordinary is not anti-celebrity," Marco continues, "but it is against using unnecessary, bolt-on ingredients that ultimately cost customers more… and often the most expensive ingredient is a celebrity endorsement."
    That clarity of perspective helped shape the creative direction. According to the Uncommon team, the collaboration was a genuine two-way effort—not just signed off but co-authored. "They're a dream to work alongside," says Joe. "The entire team has such a strong sense of what the brand stands for and a deep passion for bringing that to life. We were on the journey together at every stage… from the creative to the messaging. It's a true partnership."
    The work also taps into a broader shift in what audiences want and expect from brands, especially in beauty and wellness, where the mood is turning from aspiration to honesty.
    "We've always set out to build the brands people wish existed," says Marco. "And for us, a big part of this is doing work that uncovers and/or addresses real cultural tensions. Every single category has multiple tensions and untold stories within it – the world of beauty is no exception."

    What's striking is how the piece walks the line between creative expression and brand activism without slipping into moralising. Instead of issuing a lecture, it sparks a conversation, both on the high street and online, where its simplicity proved especially shareable. According to Joe, most passers-by stopped to take pictures, with social sentiment "overwhelmingly positive."
    For Uncommon, it marks another step in its evolution as a studio that blends brand storytelling with cultural critique. As Marco puts it: "We are living in an increasingly fragmented, hyper-visual reality. Social media has shortened our attention spans and increased the need for brands to create thumb-stopping content.
    "On top of this, in the current socio-economic climate, consumers are becoming more discerning about the companies they choose to engage with. They're looking for brands with depth, brands that stand for something."
    In that sense, The Cost of Influence is a provocation, holding a mirror up to an industry and nudging us to question what we're buying into.
  • Paramount Could Violate Anti-Bribery Law If It Pays to Settle Trump’s ‘60 Minutes’ Lawsuit, Senators Claim

    Three prominent U.S. senators warned Paramount Global and controlling shareholder Shari Redstone that they might be breaking a federal anti-bribery law if they agree to settle President Trump’s lawsuit against CBS over a “60 Minutes” segment.

    In a letter addressed to Redstone that was posted publicly, Sens. Elizabeth Warren (D-Mass.), Bernie Sanders (I-Vt.) and Ron Wyden (D-Ore.) cited reports that Paramount has been in settlement talks with Trump’s lawyers in the case. The Trump suit, which seeks at least $20 billion in damages, alleges CBS’s “60 Minutes” deceptively edited an interview with Kamala Harris and thereby violated a Texas consumer protection law. Paramount and CBS have argued that they did nothing wrong; in a motion to dismiss Trump’s suit, Paramount called the legal action “an affront to the First Amendment” that is “without basis in law or fact.” CBS News has maintained that the “60 Minutes” broadcast and promotion of the Harris interview was “not doctored or deceitful.”

    Now, the senators wrote in the letter dated May 19, “Paramount appears to be walking back its commitments to defend CBS’s First Amendment rights.” They said they were writing “to express serious concern regarding the possibility that media company Paramount Global (Paramount) may be engaging in improper conduct involving the Trump Administration in exchange for approval of its megamerger with Skydance Media” — and the senators suggested any monetary settlement in the case could be illegal.

    “Under the federal bribery statute, it is illegal to corruptly give anything of value to public officials to influence an official act,” the senators wrote. “If Paramount officials make these concessions in a quid pro quo arrangement to influence President Trump or other Administration officials, they may be breaking the law.”

    A copy of the letter is at this link. Warren and Sanders were among nine senators who urged Redstone in a May 6 open letter to not settle the lawsuit, calling it “an attack on the United States Constitution and the First Amendment.”

    A spokesperson for Paramount declined to comment but referred to the company’s previous statement saying: “This lawsuit is completely separate from, and unrelated to, the Skydance transaction and the FCC approval process. We will abide by the legal process to defend our case.” A rep for Redstone declined to comment. The White House did not respond to a request for comment.

    SEE ALSO: Shari Redstone’s Impossible Choice: She Can’t Save Both ‘60 Minutes’ and Paramount Global

    The $8 billion Paramount-Skydance deal is currently pending FCC approval. Earlier this month, Trump-appointed FCC chairman Brendan Carr said the approval of Paramount-Skydance is not connected to the president’s “60 Minutes” lawsuit. Last November, he said in a Fox News interview that a conservative group’s “news distortion” complaint against CBS over the “60 Minutes” Harris interview was “likely to arise in the context of the FCC review of [the Paramount-Skydance] transaction.” One issue Paramount and the FCC reportedly are in discussions about: securing a commitment from Paramount and Skydance to eliminate diversity, equity and inclusion programs, as part of the Trump administration’s attack on DEI. In February, Paramount said it was changing some of its DEI programs to comply with the Trump administration’s directives. But Carr may be seeking a more ironclad guarantee. The FCC last week approved Verizon’s $20 billion deal to acquire Frontier Communications after Verizon pledged to eradicate DEI initiatives.

    On Monday, CBS News president Wendy McMahon announced her resignation, writing in a memo to staff “It’s become clear that the company and I do not agree on the path forward.” That came less than a month after “60 Minutes” executive producer Bill Owens quit, also citing conflicts with Paramount execs. Warren, Sanders and Wyden drew a connection between the exits of McMahon and Owens and the Trump lawsuit: “Paramount’s scheme to curry favor with the Trump Administration has compromised journalistic independence and raises serious concerns of corruption and improper conduct,” they wrote.

    In the letter to Redstone, the senators requested answers to specific questions regarding the situation by June 2, including “Does Paramount believe the lawsuit filed by then-candidate Trump against CBS has merit?”, “Has Paramount evaluated the risk of shareholder derivative litigation from settling the lawsuit?”; and “Has 60 Minutes made changes to its content at the request of anyone at Paramount to facilitate approval of the merger?”

    The three senators also asked pointedly: “Does Paramount have any policies and procedures related to compliance with 18 U.S.C. 201 and any other laws governing public corruption? If so, please provide a copy of those policies and procedures.”

    In February, Redstone asked Paramount’s board to resolve the Trump lawsuit, including by exploring the possibility of mediation, Variety has reported. Redstone has recused herself from the board’s discussions about a settlement with Trump. 

    Trump, on his Truth Social social media account last month, said his lawsuit against CBS was “a true WINNER” and falsely claimed that Paramount, CBS and “60 Minutes” admitted to committing “this crime” of deceptively editing Harris’ answer. Trump alleged “60 Minutes” edited the interview to eliminate her “bad and incompetent” response to a question about whether Israel Prime Minister Benjamin Netanyahu is “listening to the Biden-Harris administration.” Trump asserted the version of the “60 Minutes” interview that aired “cheated and defrauded the American People at levels never seen before in the Political Arena.”

    The senators’ letter to Redstone was first reported by the Wall Street Journal.
  • The Preview Paradox: How Early RTX 5060 Review Restrictions Reshape GPU Coverage (and What it Means for Buyers)

    We never thought we’d utter the phrase RTX 5060 review restrictions, but here we are. From YouTube channels to review sites, independent tech media has always played a huge role in the launch cycle of a new graphics card. With early access to hardware and drivers, these outlets conduct their own thorough tests and give buyers an objective view of performance – the full picture, so to speak.
    With the launch of NVIDIA’s GeForce RTX 5060, that could all change.
    According to a report from VideoCardz, NVIDIA has switched up its preview model ahead of the card’s launch. Where it used to provide pre-release drivers to media outlets in exchange for comprehensive reviews, it has instead limited early access to outlets that agree to publish ‘previews’. 
    Adding insult to injury, NVIDIA has a set of conditions that these outlets must agree to, meaning NVIDIA, rather than the outlets themselves, is in charge of what information consumers receive.

    NVIDIA ‘has apparently handpicked media who are willing to share the preview, and that itself was apparently the only way to obtain the drivers.’

    This selective approach could mean we as consumers can expect less diverse perspectives prior to launch. Tom’s Hardware explains that this means day-one impressions ‘will largely be based on NVIDIA’s first-party metrics and the few reviewers who aren’t traveling.’
    NVIDIA’s RTX 5060 Review Restrictions Limit Game Choices and Graphics Settings
    So, what are NVIDIA’s parameters for the early testing and reporting during the ‘previews’? They want to:

    Limit the games allowed for benchmarking
    Only permit the RTX 5060 to be compared to specific other graphics cards, and 
    Specify individual graphics settings

    Though we don’t have a full list of the games allowed by NVIDIA, judging from already-published previews from Tom’s Guide and Techradar, the approved titles include Cyberpunk 2077, Avowed, Marvel Rivals, Hogwarts Legacy, and Doom: The Dark Ages – all games which have been optimized for NVIDIA GPUs.
    According to Tom’s Hardware, NVIDIA won’t allow the RTX 5060 to be compared to the RTX 4060, only permitting comparisons with older cards such as the RTX 2060 Super and RTX 3060. 
    Speaking to VideoCardz, GameStar Tech explained: “What’s particularly crucial is that we weren’t able to choose which graphics cards and games we would measure and with which settings for this preview.”
    Should a card’s manufacturer really have such control over this type of content? Anyone who values independent journalism says a resounding ‘No.’ 

    Credit: HardwareLuxx
    First-Party “Tests” Can’t Always Be Trusted
    Taking control of the testing environment in this way and dictating points for comparison means NVIDIA is steering the narrative. It wants these early previews to highlight the strengths of its latest card, while keeping under wraps any areas where it may fall short or fail to provide significant improvements over the last generation.
    Cards are typically tested by playing a diverse array of game titles and at different graphical settings and resolutions, with many factors such as thermal performance, power consumption, and more taken into account to provide a balanced overview that should help consumers decide if the latest release is worth an upgrade.
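    To make that breadth concrete, here is a minimal, purely illustrative Python sketch of how an independent reviewer might roll benchmark runs across many games, settings, and resolutions into the summary figures readers usually see (average FPS, frames per watt). The game names and numbers are placeholders, not real RTX 5060 results, and this is not any outlet’s actual methodology.

    # Hypothetical sketch: aggregate benchmark runs across games, settings, and
    # resolutions into summary figures. All values below are placeholders.
    from collections import defaultdict
    from statistics import mean

    # Each run: (game, resolution, settings, average_fps, board_power_watts)
    runs = [
        ("Game A", "1080p", "Ultra", 142.0, 145.0),
        ("Game A", "1440p", "Ultra", 96.0, 150.0),
        ("Game B", "1080p", "High", 118.0, 138.0),
        ("Game B", "1440p", "High", 81.0, 142.0),
    ]

    by_resolution = defaultdict(list)
    for game, resolution, settings, fps, watts in runs:
        by_resolution[resolution].append((fps, watts))

    for resolution, samples in sorted(by_resolution.items()):
        avg_fps = mean(fps for fps, _ in samples)
        avg_watts = mean(watts for _, watts in samples)
        # Frames per watt is one rough efficiency metric reviewers often report.
        print(f"{resolution}: {avg_fps:.1f} avg FPS, {avg_fps / avg_watts:.2f} FPS per watt")

    The point of the sketch is simply that every summary figure is only as representative as the list of runs feeding it – and it is exactly that list of games, cards, and settings that the reported restrictions narrow.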
    NVIDIA has come under suspicion from tech outlets for its shady behavior in the past. During a previous round of reviews, the manufacturer intentionally didn’t launch the RTX 5060 alongside the RTX 5060 Ti. It was thought this was to promote and receive positive reviews for the 16GB variant of the latter, while quietly putting the 8GB variant onto store shelves.
    Overly positive early glimpses of the latest NVIDIA products could prompt consumers to purchase if they’re desperate to upgrade, but for those who want more in-depth analysis, the RTX 5060 review restrictions are stifling independent media coverage.
    Consumers Deserve Comprehensive Reviews and Competitor Comparisons
    Constraints put in place by a manufacturer mean we’re not getting a full, comprehensive review of a product’s pros and cons. The ‘preview’ of the RTX 5060’s capabilities is distorted by these constraints, meaning we’ll never see how the card really compares to competitors from rival AMD, or previous-generation cards from NVIDIA itself. Any negatives, like performance bottlenecks when playing specific titles, also won’t be initially apparent.
    Furthermore, NVIDIA’s latest move opens up a can of worms surrounding ‘access journalism.’ This is where media outlets feel they need to comply with demands from manufacturers so they can keep receiving samples for future reviews, exclusive interviews, and so on. It’s a valid and growing concern, according to a report by NotebookCheck.
    NVIDIA seems like it’s trying to turn independent journalism into a PR effort for its own purposes. Controlling reviews in this way has many asking the question: Why doesn’t NVIDIA simply take a more ethical approach by paying for coverage and marking it as sponsored? 
    Gamers Nexus Raises Ethical Concerns Over NVIDIA Pressure
    In the NotebookCheck report, Gamers Nexus claims NVIDIA pressured them for over six months to include Multi-Frame Generation 4X performance figures in their reviews, even when the graphics cards being tested didn’t support this feature. Understandably, Gamers Nexus found the request unethical and misleading to its audience, and declined to comply.
    Gamers Nexus then says that NVIDIA threatened to remove access to interviews with its engineers. Since GN isn’t paid by NVIDIA for its coverage, withdrawing that access is the most effective way to penalize the outlet: this unique, expert content and technical insight helps GN stand out from the competition and has proven popular with subscribers.

    According to the report, ‘their continued availability was apparently made conditional on GN complying with NVIDIA’s editorial demands.’

    Stephen Burke of GN spoke about this in more detail on a recent YouTube video, likening NVIDIA’s demands to ‘extortion.’
    The alleged behavior is shocking, if true. Manufacturers behaving in this way call the entire integrity of the review process into question and raise serious ethical concerns. Should manufacturers be using sanctions to influence how their products are covered?
    Making this the norm could mean other media outlets are afraid to stray from the approved narrative and may not publish honest analysis, which is the whole point of reviews in the first place.
    Part of the appeal of independent testing is just that: it’s independent. Some feel that makes it more credible than testing carried out by companies that have a financial stake in the matter. Whatever your views on it, there’s no denying that these controlled previews only benefit the chosen outlets and have the potential to harm the credibility and reputation of others.
    FTC and Google Would Disagree with NVIDIA’s Review Restrictions
    Not to mention the fact that controlling coverage in this way runs counter to Google’s E-E-A-T guidelines for publishers. E-E-A-T – which stands for Experience, Expertise, Authoritativeness, and Trustworthiness – is designed to ensure content is helpful and, most importantly, that it can be trusted. NVIDIA’s move to influence reviews goes directly against this.
    Moreover, the FTC in the US also has strict guidelines surrounding reviews, prohibiting businesses from “providing compensation or other incentives conditioned on the writing of consumer reviews expressing a particular sentiment, either positive or negative.” The incentive doesn’t have to be monetary – and it could arguably apply to NVIDIA providing drivers only to outlets that comply with its demands.
    It’s not the first time GN has raised questions about the way NVIDIA does business. In May 2024, it posted a video about the manufacturer’s entrenched market dominance and how the ‘mere exposure effect’ could subconsciously influence consumers to buy NVIDIA products. 
    Consumers May Need to Wait For Trusted, Independent Reviews
    This move by NVIDIA could mean we all take a more critical view of the first wave of reviews when a new GPU is launched. If other manufacturers follow NVIDIA’s lead, we will likely all need to wait a week – or more – for independent reviews from trusted sources, carried out without any restrictions imposed by manufacturers. It’s that or rely on previews that don’t provide a full picture.
    This ‘preview paradox’ surrounding the launch of the RTX 5060 is undoubtedly concerning. It’s something new – a dangerous shift towards a less transparent product launch. 
    Influencing independent coverage at launch raises ethical questions and places a greater onus on consumers to ensure the reporting they’re reading is unbiased and comprehensive. 
    There’s also pressure on media outlets to remain committed to providing the full, honest picture, even when faced with the risk of losing access to products or interviews in the future.
    This practice has the potential to harm publishers’ ability to operate – particularly smaller independent outlets. There’s enough evidence available for a consumer to claim an outlet is going against best practices for reviews, as laid out by Google and the US FTC, opening those outlets up to legal ramifications.
    Ultimately, consumers deserve to be able to make informed choices. This puts that right at risk.

    #preview #paradox #how #early #rtx
    The Preview Paradox: How Early RTX 5060 Review Restrictions Reshape GPU Coverage (and What it Means for Buyers)
    We never thought we’d utter the phrase RTX 5060 review restrictions, but here we are. From YouTube channels to review sites, independent tech media has always played a huge role in the launch cycle of a new graphics card. With early access to hardware and drivers, these outlets conduct their own thorough tests and give buyers an objective view of performance – the full picture, so to speak. With the launch of NVIDIA’s GeForce RTX 5060, that could all change.
    According to a report from VideoCardz, NVIDIA has switched up its preview model before the card’s launch. Where it used to provide pre-release drivers to media outlets in exchange for comprehensive reviews, it has now limited early access to outlets that agree to publish ‘previews’. Adding insult to injury, NVIDIA has a set of conditions these outlets must agree to, meaning NVIDIA, rather than the outlets themselves, is in charge of what information consumers receive. NVIDIA ‘has apparently handpicked media who are willing to share the preview, and that itself was apparently the only way to obtain the drivers.’ This selective approach could mean we as consumers can expect less diverse perspectives prior to launch. Tom’s Hardware explains that this means day-one impressions ‘will largely be based on NVIDIA’s first-party metrics and the few reviewers who aren’t traveling.’
    NVIDIA’s RTX 5060 Review Restrictions Limit Game Choices and Graphics Settings
    So, what are NVIDIA’s parameters for the early testing and reporting during the ‘previews’? They want to:
    Limit the games allowed for benchmarking
    Only permit the RTX 5060 to be compared to specific other graphics cards, and
    Specify individual graphics settings
    Though we don’t have a full list of the games allowed by NVIDIA, judging from already-published previews from Tom’s Guide and TechRadar, the approved titles include Cyberpunk 2077, Avowed, Marvel Rivals, Hogwarts Legacy, and Doom: The Dark Ages – all games which have been optimized for NVIDIA GPUs. According to Tom’s Hardware, NVIDIA won’t allow the RTX 5060 to be compared to the RTX 4060, only permitting comparisons with older cards such as the RTX 2060 Super and RTX 3060.
    Speaking to VideoCardz, GameStar Tech explained: “What’s particularly crucial is that we weren’t able to choose which graphics cards and games we would measure and with which settings for this preview.” Should a card’s manufacturer really have such control over this type of content? Anyone who values independent journalism says a resounding ‘No.’
    (Image credit: HardwareLuxx)
    First Party “Tests” Can’t Always Be Trusted
    Taking control of the testing environment in this way and dictating points for comparison means NVIDIA is steering the narrative. It wants these early previews to highlight the strengths of its latest card, while keeping under wraps any areas where it may fall short or fail to provide significant improvements over the last generation. Cards are typically tested across a diverse array of game titles at different graphical settings and resolutions, with factors such as thermal performance, power consumption, and more taken into account to provide a balanced overview that should help consumers decide if the latest release is worth an upgrade.
    NVIDIA has come under suspicion from tech outlets for its shady behavior in the past. During a previous round of reviews, the manufacturer intentionally didn’t launch the RTX 5060 alongside the RTX 5060 Ti. It was thought this was to promote and receive positive reviews for the 16GB variant, while quietly putting the 8GB variant onto store shelves. Overly positive early glimpses of the latest NVIDIA products could prompt consumers to purchase if they’re desperate to upgrade, but for those who want more in-depth analysis, the RTX 5060 review restrictions are stifling independent media coverage.
    Consumers Deserve Comprehensive Reviews and Competitor Comparisons
    Constraints put in place by a manufacturer mean we’re not getting a full, comprehensive review of a product’s pros and cons. The ‘preview’ of the RTX 5060’s capabilities is distorted by these constraints, meaning we’ll never see how the card really compares to competitors from rival AMD, or to previous-generation cards from NVIDIA itself. Any negatives, like performance bottlenecks when playing specific titles, also won’t be initially apparent.
    Furthermore, NVIDIA’s latest move opens up a can of worms surrounding ‘access journalism.’ This is where media outlets feel they need to comply with demands from manufacturers so they can keep receiving samples for future reviews, exclusive interviews, and so on. It’s a valid and growing concern, according to a report by NotebookCheck. NVIDIA seems like it’s trying to turn independent journalism into a PR effort for its own purposes. Controlling reviews in this way has many asking the question: Why doesn’t NVIDIA simply take a more ethical approach by paying for coverage and marking it as sponsored?
    Gamers Nexus Raises Ethical Concerns Over NVIDIA Pressure
    In the NotebookCheck report, Gamers Nexus claims NVIDIA pressured them for over six months to include Multi-Frame Generation 4X (MFG4X) performance figures in their reviews, even when the graphics cards being tested didn’t support this feature. Understandably, Gamers Nexus found the request unethical and misleading for its viewers and declined to comply. Gamers Nexus then says that NVIDIA threatened to remove access to interviews with its engineers. Since GN isn’t paid by NVIDIA for its coverage, this is the best way to penalize the outlet: this unique, expert content and technical insight helps it stand out from the competition and has proven popular with subscribers. According to the report, ‘their continued availability was apparently made conditional on GN complying with NVIDIA’s editorial demands.’ Stephen Burke of GN spoke about this in more detail in a recent YouTube video, likening NVIDIA’s demands to ‘extortion.’
    The alleged behavior is shocking, if true. Manufacturers behaving in this way bring the entire integrity of the review process into question and raise several ethical concerns. Should manufacturers be using sanctions to influence how their products are covered? Making this the norm could mean other media outlets are afraid to stray from the approved narrative and may not publish honest analysis, which is the whole point of reviews in the first place.
    Part of the appeal of independent testing is just that: it’s independent. Some feel that makes it more credible than testing carried out by companies that have a financial stake in the matter. Whatever your views on it, there’s no denying that these controlled previews only benefit the chosen outlets and have the potential to harm the credibility and reputation of others.
    FTC and Google Would Disagree with NVIDIA’s Review Restrictions
    Not to mention the fact that controlling coverage in this way expressly goes against Google’s EEAT guidelines for publishers. The EEAT guidelines, standing for Experience, Expertise, Authoritativeness, and Trustworthiness, are designed to ensure content is helpful – but most importantly, that it can be trusted. NVIDIA’s move to influence reviews goes directly against this.
    Moreover, the FTC in the US also has strict guidelines surrounding reviews, prohibiting businesses from “providing compensation or other incentives conditioned on the writing of consumer reviews expressing a particular sentiment, either positive or negative.” This doesn’t have to be monetary – and could apply if NVIDIA is providing drivers only to outlets that comply with its demands.
    It’s not the first time GN has raised questions about the way NVIDIA does business. In May 2024, the outlet posted a video about the manufacturer’s entrenched market dominance and how the ‘mere exposure effect’ could subconsciously influence consumers to buy NVIDIA products.
    Consumers May Need to Wait For Trusted, Independent Reviews
    This move by NVIDIA could mean we all take a more critical view of the first wave of reviews when a new GPU is launched. If other manufacturers follow NVIDIA’s lead, we will likely all need to wait a week – or more – for independent reviews from trusted sources, carried out without any restrictions imposed by manufacturers. It’s that or rely on previews that don’t provide a full picture.
    This ‘preview paradox’ surrounding the launch of the RTX 5060 is undoubtedly concerning. It’s something new – a dangerous shift towards a less transparent product launch. Influencing independent coverage at launch raises ethical questions and places a greater onus on consumers to ensure the reporting they’re reading is unbiased and comprehensive. There’s also pressure on media outlets to remain committed to providing the full, honest picture, even when faced with the risk of losing access to products or interviews in the future. This practice has the potential to harm publishers’ ability to operate – particularly smaller independent outlets. An outlet that goes along with these restrictions also risks falling foul of the review best practices laid out by Google and the US FTC, opening it up to legal ramifications. Ultimately, consumers deserve to be able to make informed choices. This puts that right at risk.
  • Email Deliverability: How to Dodge the Spam Folder

    Reading Time: 17 minutes
    Email marketing isn’t just about sending emails. Your emails could have the most compelling subject line, the perfect design, and personalized offers, and still land in your customers’ spam folders. You may call it ‘heartbreak.’ We call it an email deliverability issue.
    Email deliverability ensures your messages reach your audience where they’ll be read, not in the clutches of spam filters or ignored ‘Promotions’ tabs.
    Thankfully, this guide is here to arm you with actionable tactics to improve email marketing deliverability.
    But wait! Before diving headfirst into tactics and tools, let’s take a step back and address the basics of what exactly deliverability is, and how it’s different from email delivery.

     
    What is Email Deliverability?
    Email deliverability is the ability of your emails to successfully land in your customers’ primary inboxes, rather than getting routed to spam folders or getting rejected. It evaluates whether your email reaches its destination, and if it’s considered trustworthy by mailbox providers like Gmail, Yahoo, or Outlook.
    Email Delivery vs. Email Deliverability: What’s the Difference?
    On the surface, email deliverability and email delivery may sound like interchangeable terms, but they’re far from identical. Let’s make it crystal clear:

    Email delivery means your email has been successfully accepted by your recipient’s email server. In other words, it’s like your email knocking on their door and being allowed into the house.
    Email deliverability, on the other hand, is what happens after that door is opened. Does your email land in the inbox? Or does it get tossed into the spam folder?

    Here’s an example to tie it all together: Imagine sending out 1,000 emails. If 950 of them successfully reach the targeted servers, you have an email delivery rate of 95%. But if only 750 of those emails end up in the inbox, your email deliverability rate is 75%. See the difference now?
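    To make the arithmetic concrete, here is a minimal sketch in Python using the hypothetical counts from the example above (1,000 sent, 950 accepted by the receiving servers, 750 landing in the inbox); the numbers are illustrative only.
```python
# Hypothetical counts from the example above.
emails_sent = 1_000
emails_accepted = 950   # accepted by the receiving servers (delivery)
emails_inboxed = 750    # actually landed in the primary inbox (deliverability)

delivery_rate = emails_accepted / emails_sent * 100        # 95.0%
deliverability_rate = emails_inboxed / emails_sent * 100   # 75.0%

print(f"Delivery rate: {delivery_rate:.1f}%")
print(f"Deliverability rate: {deliverability_rate:.1f}%")
```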
    “But what does it matter anyway?” you ask.
    Why is Email Deliverability Important?
    Email deliverability is the backbone of every successful email marketing campaign. If your emails don’t land in recipients’ inboxes, your efforts won’t drive results.
    A strong deliverability strategy ensures your messages reach the right inboxes, increasing engagement and driving action. It also protects your sender reputation by monitoring key metrics like your sender score and IP reputation. Tools like MxToolbox and Talos can help you proactively check your domain and IP health.
    Ultimately, high email deliverability safeguards your reputation and maximizes the impact of your email marketing efforts.
     
    What Affects Email Deliverability?
    Below, we’ll discuss the most important factors that can affect your email deliverability. Buckle up, marketers. This is about to get real.
    1. Domain and IP Reputation
    Mailbox providers use your domain or sender reputation to assess how trustworthy you are. Accordingly, they decide which folder to send your email to. If they see evidence that you send sketchy or unwanted emails, your reputation tanks faster than a celebrity caught in a scandal. And so does your ability to hit inboxes.
    2. Number of Emails Sent
    More isn’t always better. Unless we’re talking about pizza or vacation days, of course.
    When it comes to email volumes, though, the quantity you’re sending can directly impact your email deliverability.
    If you send out too many emails too soon, mailbox providers might consider this overly aggressive behavior. They may slow down your email delivery or flag you as spammy.
    3. Email Content
    Does your email content look like it was written by a used car salesman from ‘95? You’re headed for trouble.
    Here’s the deal: Email providers analyze the content of every email to determine if it’s spammy garbage or something worth reaching an inbox.
    Things like excessive exclamation points, ALL CAPS, misleading subject lines, or overusing words like “FREE!!!” sound alarm bells. Oh, and including massive files or embedding shady links? That’s like grinning with spinach stuck in your teeth.
    How does email file size affect deliverability, though? Well, it typically takes longer to download and see heavy emails, and that can impact email engagement. Mailbox providers like Gmail also clip messages weighing over 102 KB. The moral of the story is, your email file size should be less than 100 KB. No questions asked.
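    As a quick pre-send sanity check, you can measure the size of your rendered HTML against that clipping threshold. This is a minimal sketch; rendered_html stands in for whatever your template engine actually produces.
```python
def check_email_size(rendered_html: str, limit_kb: float = 100.0) -> bool:
    """Return True if the rendered HTML stays under the size limit.

    Gmail clips messages above roughly 102 KB, so we leave a small margin.
    """
    size_kb = len(rendered_html.encode("utf-8")) / 1024
    print(f"Email body: {size_kb:.1f} KB (limit {limit_kb} KB)")
    return size_kb <= limit_kb

# Example: a placeholder body, not a real campaign.
check_email_size("<html><body>" + "Hello! " * 2000 + "</body></html>")
```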
    4. Email Sending Infrastructure
    Welcome to the tech part of email deliverability. Your email sending infrastructure includes things like:

    Proper domain authentication: Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication, Reporting & Conformance (DMARC). These security protocols prove to mailbox providers that you’re not a shady imposter trying to phish their customers (a quick record-lookup sketch follows this list).
    Dedicated IP addresses: These help establish consistency over time, especially for higher email volumes.
    Reliable Email Service Providers: Choose platforms that prioritize deliverability. In short, they should authenticate your emails, use reputable IPs, and comply with CAN-SPAM, GDPR, and other important regulations.

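    If you want to verify the authentication records from the list above, a plain DNS lookup shows whether SPF and DMARC are published at all. This is a minimal sketch that assumes the third-party dnspython package and a placeholder domain; DKIM lives at <selector>._domainkey.<domain>, so checking it also requires knowing your provider’s selector.
```python
import dns.resolver  # third-party: pip install dnspython

def txt_records(name: str) -> list[str]:
    """Return the TXT records published at a DNS name (empty list if none)."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"  # placeholder: use your own sending domain

spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = txt_records(f"_dmarc.{domain}")

print("SPF:", spf or "missing")
print("DMARC:", dmarc or "missing")
```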
    5. Performance Metrics and User Engagement
    Here’s where you have to face cold, hard facts: your audience’s reaction to your emails matters more than your own opinion of your email campaigns.

    Metrics like open rates, click-through rates, and spam complaint rates aren’t just vanity numbers. If mailbox providers notice that customers are ignoring your emails, deleting them without reading, or marking them as ‘spam’, they’ll quietly penalize you by deprioritizing all future emails you send.
    Translation: Lower engagement = Bad email deliverability.
    Speaking of which, it’s time to see what makes deliverability good or bad.
     
    How Do You Measure Email Deliverability?
    Measuring email marketing deliverability is about understanding not only how many emails you’re sending, but also how many actually reach the intended inboxes and how well they perform when they do.
    To measure email deliverability, you’ll need a sharp eye for identifying patterns in your campaigns. Let’s dive into the most important aspects that impact how you measure and interpret deliverability.
    What is a Good Email Deliverability Rate?
    Okay, let’s address the elephant in the inbox: how good is good?
    In general terms, a “good” email deliverability rate ranges between 95% and 98%. If you’re hitting these numbers, pat yourself on the back—you’re doing far better than average. Anything below 90%, however, is definitely cause for concern, as it signals red flags in your sender reputation, email copy, or email list management tactics.
    That said, keep in mind that these numbers can vary depending on your industry, audience, and email strategy. For instance, Ecommerce brands targeting global audiences often deal with larger volumes and higher bounces. On the other hand, smaller B2C SaaS brands might maintain a higher deliverability baseline due to more niche, curated lists.
    What is an Email Deliverability Score and How Do You Use It?
    An email deliverability score assesses your overall email performance based on factors like bounce rates, sender reputation, customer engagement, and list hygiene.
    Essentially, mailbox providers like Gmail and Outlook assign trust levels to sender domains and IP addresses, and the deliverability score helps you understand how high their trust in you is.
    Use this score strategically to:

    Audit the effectiveness of your latest email campaigns.
    Track long-term trends in reputation and performance.
    Adjust tactics, such as improving sender authenticity or list hygiene.

    Pro tip: Most email marketing platforms can provide you with a deliverability score right inside your dashboard.
    Sure, the score gives you a bird’s-eye view of the deliverability. But you need to dig deeper to get precise insights into what’s working and not working for your emails.
     
    Top 5 Email Deliverability Metrics to Understand Campaign Performance
    Here are the five email deliverability metrics every savvy B2C marketer needs to analyze.

    Now let’s look at them in detail, shall we?
    1. Open Rate

    The open rate measures the percentage of recipients who open your email after it lands in their inbox. It’s the first metric that indicates whether your subject line, brand, or sending name resonates enough with your audience to make them open your emails.
    Low open rates might mean your emails aren’t even making it to inboxes, or that your subject lines need to get better at sparking curiosity.
    2. Clickthrough Rate

    The CTR is the percentage of recipients who clicked on a link inside your email after opening it. This metric is a direct indicator of how well your email content engages the audience. Did they find value in your email, or did you lose their attention somewhere within your copy?
    3. Spam Complaint Rate

    The spam complaint rate measures the percentage of recipients who mark your email as ‘spam’. This email marketing metric matters because mailbox providers take complaints very seriously, which might tank your sender reputation.
    4. Unsubscribe Rate

    Unsubscribe rate tracks how many recipients opted out of receiving your emails.
    While it’s natural to see a few unsubscribes here and there, a sudden spike signals a disconnect between your email strategy and audience expectations. Recipients unsubscribing is essentially them telling you, “Thanks, but no thanks.” It’s a sign of misaligned targeting or irrelevant content and can provide insight into how to refine your audience segmentation.
    5. Deliverability Rate

    The deliverability rate measures how many emails actually made it to the recipient’s inbox, compared to how many you initially sent.
    It’s calculated by dividing the number of emails that landed in the inbox by the number you sent, expressed as a percentage – the same math as the 1,000-email example earlier in this guide.
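    Pulling the five metrics together, here is a minimal sketch with made-up campaign counts. Note that platforms differ on denominators (some compute open and click rates against delivered emails rather than opens), so treat the choices below as one reasonable convention, not a standard.
```python
def campaign_metrics(sent, inboxed, opened, clicked, complaints, unsubscribes):
    """Compute the five deliverability metrics as percentages."""
    return {
        "deliverability_rate": inboxed / sent * 100,
        "open_rate": opened / inboxed * 100,          # opens among inboxed emails
        "clickthrough_rate": clicked / opened * 100,  # clicks among opened emails
        "spam_complaint_rate": complaints / sent * 100,
        "unsubscribe_rate": unsubscribes / sent * 100,
    }

# Hypothetical campaign: 10,000 sent, 9,600 inboxed, 2,400 opened, 360 clicked.
for name, value in campaign_metrics(10_000, 9_600, 2_400, 360, 8, 40).items():
    print(f"{name}: {value:.2f}%")
```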
     
    3 Critical Email Deliverability Issues and Problems to Avoid
    Even the best marketers face roadblocks on the highway to the customer’s inbox. Here are three email deliverability issues to sidestep early:
    1. How to Avoid Spam Filters in Email Marketing
    Spam filters catch anything they deem suspicious or irrelevant and banish it to the spam folder. For B2C marketers like you, this spells disaster.
    Emails flagged as spam don’t just fail to reach your audience; they damage your sender reputation and affect future deliverability. The good news? With the right practices, you can dodge spam filters and ensure your emails land in the inbox.
    “Pull Rather Than Push”
    The best way to avoid spam filters is to focus on engagement. Instead of spamming your users with hollow promotional material, entice them with rich, relevant, and consumable content that provides unmistakable value. Whether it’s exclusive offers or personalized updates, make emails worth opening.
    Maintain a Clean Email List
    Emailing outdated, invalid, or irrelevant addresses is your one-way ticket to spam territory. Worse, you might email a spam trap. It’s an intentionally invalid address designed to catch domains with poor email practices.
    Spam filters look for patterns, and a high volume of bounces or spam reports can slam your domain reputation, decreasing overall deliverability.
    One way of recognizing the issue is when your email open rates are drastically low.
    Create Content in the Right Balance and Tone
    Spam filters are deeply sensitive to the content of your email—everything from subject lines to text color can trigger red flags. To pass their strict scrutiny, your content must be clean, relevant, and balanced.
    What to Avoid

    Words like “FREE!!!” or “Make Millions Now” trigger alarms. Avoid excessive punctuation, caps lock, or promotional buzzwords.
    Keep content short, relevant, and readable. Long-winded emails frustrate customers and scream “spam” to filters.

    What to Focus On

    Add recipients’ names and offer content tailored to their activity or preferences.
    Optimize your emails for mobile. Most customers check emails on their phones, so ensure proper rendering on smaller screens.
    Include alt text for images and a plain-text version of every email for better compatibility.

    Build a Relationship with Subscribers
    Spam filters penalize cold, irrelevant communication, but fostering a respectful relationship with subscribers can work in your favor.
    Don’t ignore unsubscribe requests. Rather, respect their choice to opt out. Bombarding unsubscribers breeds resentment and spam complaints.
    Encourage subscribers to add your domain to their address book. This keeps your emails prioritized and inbox-friendly.
    Tailor communication around subscriber needs rather than what your company wants to push. Strong customer relationships not only improve email deliverability, but also create brand loyalty. That’s where customer relationship emails come into the picture.
    2. How to Maintain a Healthy Email Domain Reputation
    Your email domain reputation is a key factor that mailbox providers use to decide whether your emails land in the inbox, spam folder, or get blocked altogether. A poor reputation, caused by spam complaints, bounces, or sending to spam traps, can destroy your deliverability and damage your brand’s credibility.
    To maintain your reputation, focus on consistent sending patterns, clean email lists, and authenticated domains using SPF, DKIM, and DMARC. Avoid sudden spikes in email volume, and ensure your content is relevant and personalized to improve engagement.
    3. Avoid Email Spam Traps
    Spam traps or honeypots are invalid email addresses created by mailbox providers to identify and penalize senders with poor email list hygiene. These addresses don’t engage with emails and can enter contact lists through outdated addresses, typos, or unethical practices like scraping or purchasing lists.
    Hitting spam traps damages your sender reputation, affects email deliverability, and can result in emails landing in spam folders or being blocked altogether.
    Types of Email Spam Traps
    There are three main types of spam traps:

    Pristine Spam Traps: These are never-used addresses created to catch senders who purchase lists or scrape public forums.
    Recycled Traps: These are once-valid addresses repurposed into traps after prolonged inactivity and failure to suppress them.
    Typo Traps: These are email addresses containing user-input errors, like “gail.com” instead of “gmail.com.”

    Each of these indicates lapses in list hygiene or collection processes and contributes to deliverability issues.
    How to Identify Spam Traps
    Spam traps often point to declining email deliverability rates because these addresses never engage with emails or click through content. They are typically outdated, invalid, or associated with suspicious email collection methods.
    If you notice drops in deliverability metrics like increasing bounces or disengagement, tools like SpamCop can help detect these problematic addresses.
    How to Avoid Spam Traps
    Precision list management is key to minimizing spam traps. Double opt-ins verify addresses and ensure subscribers consent to communication.
    Send confirmation emails upon sign-up to validate authenticity, and consistently remove inactive subscribers to prevent recycled traps. Buying contact lists is a strict no-no, as these are a magnet for spam traps.
    If issues arise, validate your list using professional tools and re-permission campaigns to re-engage or suppress inactive contacts. A clean, permission-based list not only avoids spam traps, but also enhances engagement and deliverability.
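    A common way to implement the double opt-in mentioned above is a signed, expiring confirmation token. The sketch below assumes the itsdangerous package (any signed-token library works) plus a placeholder secret and URL; it is illustrative rather than production-ready.
```python
from itsdangerous import URLSafeTimedSerializer, SignatureExpired, BadSignature

serializer = URLSafeTimedSerializer("replace-with-a-long-random-secret")

def confirmation_link(email: str) -> str:
    """Build the link to embed in the opt-in confirmation email."""
    token = serializer.dumps(email, salt="email-confirm")
    return f"https://example.com/confirm?token={token}"  # placeholder URL

def confirm(token: str, max_age_seconds: int = 86_400) -> str | None:
    """Return the email address if the token is valid and fresh, else None."""
    try:
        return serializer.loads(token, salt="email-confirm", max_age=max_age_seconds)
    except (SignatureExpired, BadSignature):
        return None

print(confirmation_link("subscriber@example.com"))
```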
     
    How to Improve Email Deliverability: 9 Best Practices to Follow
    Improving deliverability requires discipline, tactics, and the right email deliverability service. Follow these nine tried-and-tested deliverability best practices to ensure your emails land in customers’ inboxes, and not the dreaded spam folder.
    1. Confirm Sender Authentication
    Before anything else, authenticate your domain with SPF, DKIM, and DMARC. Without them, email providers might assume your campaigns are malicious and block them altogether.
    Work with your IT team or use platforms like MoEngage to configure these settings efficiently. MoEngage ensures your domain’s authentication is in place, enhancing your odds of bypassing spam filters.
    2. Build a Clean Email List
    As we’ve mentioned before, a high-quality email list is non-negotiable for strong deliverability. Outdated or purchased lists kill your reputation faster than a bad joke at a standup gig. Here are a few pointers to help you clean and grow your email list regularly:

    Use double opt-ins to ensure new subscribers genuinely want your emails.
    Regularly remove bounced, invalid, and unengaged addresses from your list.
    Never buy or scrape email lists. It’s unethical, and in some regions, outright illegal.

    3. Get the Email Copy Right
    Email content dictates how your audience connects with your brand. Poorly written, overly spammy-looking content will ruin your reputation with both readers and ESPs.
    Craft personalized, value-driven copy. Steer clear of all-caps headlines, exclamation overload, or spammy language. A/B test your emails to identify what resonates best with your audience.

    4. Use Fewer Images
    Yes, visuals are powerful, but they can also slow down your email’s load time and raise red flags for spam filters, especially if you add multiple clickable elements.
    Keep your email design balanced with a higher proportion of text than images; an image-to-text ratio of around 40:60 is considered ideal. Use alt text for each image to ensure accessibility, in case images don’t fully render. Maintain standard HTML email template sizes: 600 pixels wide for desktop, and 320 pixels (vertical) or 480 pixels (horizontal) for mobile devices. Keep your HTML code light and clean without any hidden images or URLs.
    5. Reduce the Number of Links
    Bombarding readers with more links than necessary doesn’t just look spammy, but can also tank deliverability. Spam filters are especially ruthless if your links come from too many domains.
    Stick to one or two actionable, well-placed links per email. Avoid URL shorteners or unverified domains. Don’t forget to preview your email to ensure all hyperlinks are functional and minimal.
    6. Segment Your List
    Generic email blasts, move over. Segmentation allows you to send tailored content based on customer preferences and behavior.
    Use demographic and behavioral data to create focused email segments. MoEngage’s deep Recency, Frequency, and Monetary Value (RFM) segmentation capabilities help you craft hyper-personalized campaigns that boost clickthrough and engagement rates.
    7. Use Real Names and Email Addresses
    Never underestimate the human touch. Blatantly promotional email addresses like ‘promotions@mybusiness.com’ are easily classified as such and are usually placed in the ‘Promotions’ tab.
    Using real people’s names instead, like ‘John Doe’ and ‘jdoe@mystore.com’, suggests the sender might be a human being, and the content might not be promotional.
    8. Comply with Email Deliverability Regulations
    Ignoring compliance laws like the General Data Protection Regulation (GDPR) and CAN-SPAM? Nuh-uh. You don’t want your domain to get blacklisted just because you failed to meet a few legal requirements, do you?
    Always include an opt-out link, honor unsubscribe requests immediately, and only email customers who’ve explicitly given consent to receive your emails.
    9. Set Up Brand Indicators for Message Identification
    BIMI is the new gold star for email branding. It lets you display your brand logo next to your emails, boosting trust and recognition, and improving email deliverability.
    Configure BIMI, which requires a combination of domain authentication and logo verification. Adding this layer increases the likelihood of your emails being opened and decreases spam suspicion.
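    For reference, a BIMI policy is just one more TXT record, published at default._bimi.<your domain>. The sketch below shows the general shape with placeholder URLs and a tiny tag parser; the optional a= tag points to a Verified Mark Certificate, which some mailbox providers require before displaying the logo.
```python
# Example BIMI TXT record (placeholder URLs) published at default._bimi.example.com
bimi_record = "v=BIMI1; l=https://example.com/brand/logo.svg; a=https://example.com/brand/vmc.pem"

def parse_bimi(record: str) -> dict[str, str]:
    """Split a BIMI record into its tag=value pairs."""
    return dict(part.strip().split("=", 1) for part in record.split(";") if part.strip())

tags = parse_bimi(bimi_record)
assert tags.get("v") == "BIMI1", "record must declare version BIMI1"
assert tags.get("l", "").endswith(".svg"), "logo must be an SVG (SVG Tiny PS profile)"
print(tags)
```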
     
    How to Increase Email Deliverability for Gmail, Yahoo, and Outlook
    Gmail, Yahoo, and Outlook have enforced strict guidelines to keep spam out and ensure better inbox experiences for subscribers. For brands, this means staying compliant or losing precious inbox placement. Let’s break it down.
    First things first: your authentication game needs to be airtight. If you’re sending more than 5,000 emails a day, a DMARC policy is non-negotiable.
    It’s also crucial that the domain in your “From” address is configured with the SPF, DKIM, and DMARC protocols. Additionally, valid forward and reverse DNS/PTR records are required for your email sending domains and IPs. Platforms like MoEngage handle much of this for you during onboarding, but it’s worth double-checking if you’ve added new sending addresses or use your own ESP.
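    The forward and reverse DNS requirement means your sending IP’s PTR record should resolve to a hostname that, in turn, resolves back to the same IP (often called forward-confirmed reverse DNS). Here is a minimal standard-library sketch with a placeholder documentation IP.
```python
import socket

def has_matching_reverse_dns(ip: str) -> bool:
    """Check forward-confirmed reverse DNS: IP -> hostname -> same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)            # reverse (PTR) lookup
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward (A) lookup
    except (socket.herror, socket.gaierror):
        return False
    return ip in forward_ips

print(has_matching_reverse_dns("203.0.113.10"))  # placeholder sending IP
```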

    Then, make unsubscribing effortless. Both Gmail and Yahoo demand clear, visible, one-click unsubscribe links and mandate that opt-out requests be honored within 48 hours.
    With MoEngage, this functionality is automatically included for compliant campaigns. If you’re using your own custom setup, ensure subscription tracking is enabled and working in real time to avoid trouble.
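    If you run your own sending setup, one-click unsubscribe is expressed through two message headers defined in RFC 8058. A minimal sketch using Python’s standard email module, with placeholder addresses and URLs:
```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "Jane at Example <jane@example.com>"   # placeholder sender
msg["To"] = "subscriber@example.com"                 # placeholder recipient
msg["Subject"] = "This week's picks"
msg.set_content("Hi there - here's what's new this week.")

# RFC 8058 one-click unsubscribe: a mailto fallback plus an HTTPS endpoint
# that must accept a POST and process the opt-out promptly.
msg["List-Unsubscribe"] = "<mailto:unsubscribe@example.com>, <https://example.com/unsubscribe?u=123>"
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"

print(msg.as_string())
```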
    Finally, you must keep spam rates in check. Yahoo and Gmail expect a spam rate below 0.10%, while hitting 0.30% could spell major email deliverability issues.
    Prevent this by sticking to opt-in subscribers, sending periodic consent confirmation emails, regularly cleaning your lists, and suppressing unengaged subscribers. Maintaining dynamic sending patterns based on customer interests and behavior also helps you adhere to updated compliance standards.
    By aligning your email drip campaigns with these rules, you’ll ensure maximum inbox placement, keep spam filters at bay, and boost your customer engagement.
     
    How to Conduct an Email Deliverability Test
    Running an email deliverability test can help you identify potential issues before they wreak havoc on your campaigns. Whether it’s an email authentication problem, unengaged recipients, or spam-triggering mistakes, running these tests ensures your emails make their way into inboxes smoothly.

    Below, we’ll give you a detailed walkthrough of 5 tips to help you ace your email deliverability testing game.
    1. Use Seed Email Lists
    A seed list is essentially a mini-audience of test email addresses from various providers. Sending your email to this group allows you to measure whether it lands in the inbox, Promotional tab, or the spam folder.
    How does this help? It gives you insights into any immediate deliverability issues tied to specific providers or domains.
    For example, if your test email lands in Yahoo’s spam folder, you can troubleshoot the issue before it impacts a broader audience. Fixing these bugs early can boost your email deliverability rate and protect your sender reputation.
    2. Monitor Email Authentication Protocols
    During your email deliverability test, check each security protocol to ensure the email is passing authentication checks, so it comes across as legitimate and trustworthy. A properly authenticated email drastically improves your chances of landing in an inbox.
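    When reviewing messages that arrive at your seed inboxes, the Authentication-Results header (RFC 8601) records whether SPF, DKIM, and DMARC passed. The rough sketch below scans a raw header block for those verdicts; the sample value is illustrative, not taken from a real provider.
```python
import re

# Illustrative header, roughly what a seed-inbox message might contain.
raw_headers = """\
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=example.com;
 dkim=pass header.d=example.com;
 dmarc=pass header.from=example.com
"""

def auth_verdicts(headers: str) -> dict[str, str]:
    """Pull spf/dkim/dmarc verdicts out of an Authentication-Results header."""
    return dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", headers))

verdicts = auth_verdicts(raw_headers)
print(verdicts)  # e.g. {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
assert all(v == "pass" for v in verdicts.values()), "an authentication check failed"
```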
    3. Test Subject Lines and Preheaders
    Your subject line sets the tone and determines whether someone opens your email or sends it to email purgatory.
    When testing email deliverability, keep an eye on how different subject lines perform. Use A/B testing tools to experiment with variations and select one that is catchy, yet non-spammy. Combined with preheader tweaks, this can optimize both inbox placement and engagement metrics.
    4. Check for Blacklist Issues
    Email blacklists are essentially “no-fly lists” for senders known to spam inboxes. Even the best intentions can land you here if you’re not careful. Before you send any email campaign, you should run an email blacklist check using tools like MXToolbox.
    Identifying and resolving blacklist issues early significantly reduces the risk of your email being blocked at the server level. Plus, it helps you maintain your all-important sender reputation.
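    Blacklist (DNSBL) lookups work by reversing your IP’s octets and querying them under the list’s zone; a listed IP returns an address, while an unlisted one returns NXDOMAIN. A minimal sketch assuming the dnspython package, a placeholder IP, and Spamhaus ZEN as the example list (check each list’s usage policy before automating queries):
```python
import dns.resolver  # third-party: pip install dnspython

def is_listed(ip: str, dnsbl_zone: str = "zen.spamhaus.org") -> bool:
    """Return True if the IP appears on the given DNS blacklist."""
    reversed_ip = ".".join(reversed(ip.split(".")))  # 203.0.113.10 -> 10.113.0.203
    query = f"{reversed_ip}.{dnsbl_zone}"
    try:
        dns.resolver.resolve(query, "A")
        return True  # any answer means the IP is listed
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False

print(is_listed("203.0.113.10"))  # placeholder sending IP
```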
    5. Evaluate Email Engagement Metrics Early
    Sure, email deliverability testing mostly focuses on getting your email into inboxes, but don’t ignore engagement metrics like open rates and clicks during your test phase. Why? Because customer behavior heavily influences what mailbox providers like Gmail think of your emails. Low engagement during testing could suggest your content or timing needs refinement.
    Test to identify whether certain segments of your subscribers impact email deliverability performance. Based on these insights, you can remove unengaged subscribers or reconnect them with tailored re-engagement email campaigns later, creating a more engaged and clean list.
     Email Deliverability Checklist: Make Sure You’re Prepared
    Nail your campaigns with these foolproof steps made for B2C marketers to ensure email deliverability.

     
    3 Best Email Deliverability Tools to Ensure Your Emails Reach Your Customers’ Inboxes
    From ensuring domain authentication to tracking reputation scores, there are email software platforms to equip you with the insights and features needed to consistently hit inboxes. Here are the 3 best email deliverability tools to consider for your campaigns.
    1. MoEngage

    *shy giggling* We’re honest. *eyelash batting*
    Seriously, MoEngage is a powerhouse designed specifically for B2C marketers working in industries like Ecommerce, fintech, QSR, and media.
    While it’s much more than just an email deliverability tool, MoEngage shines by offering advanced deliverability monitoring features.
    With real-time analytics, automation tools, and even AI-driven delivery optimization, you’ll know when, where, and why your emails are or aren’t landing in inboxes. It lets you filter for email bot opens and set up double opt-in with advanced configuration options.
    It also offers email deliverability services, including setting up proper authentication, email strategy discussions, assistance with inbox placement and bulking issues, troubleshooting blocks and blacklisting issues, content analysis, and reviewing industry best practices.
    Pricing: MoEngage offers customized pricing based on business size and requirements, with growth plans starting from /month. Its premium plans offer advanced segmentation, AI-powered optimizations, in-depth analytics, and a lot more.
    Best for: Dynamic audience segmentation, AI-driven email optimization, and granular insights into email deliverability.
    2. Inbox Monster

    Inbox Monster is the go-to for marketers who require deep transparency into email deliverability metrics. The tool specializes in deliverability testing and gives real-time insights into where your emails are landing—whether it’s the inbox, ‘Promotions’ folder, or spam.
    It also offers spam trap monitoring, blacklist checking, and even time-sensitive post-send diagnostics, ensuring you get a complete picture of your email performance.
    Pricing: Inbox Monster operates on a subscription-based pricing model. Plans typically start at /month for smaller teams, with enterprise-level solutions available.
    Best for: Real-time DMARC and spam trap notifications
    3. Mailmodo

    Mailmodo offers interactive Accelerated Mobile Pages (AMP) emails, which are a great choice if you want to improve both deliverability and engagement in one fell swoop. It focuses heavily on keeping your emails out of spam folders by providing tools like spam test previews, inbox placement testing, and personalization options.
    Its interface is easy to use for marketers who want quick insights and actionable suggestions without needing to wade through overly complex data. Plus, the tool’s smart templates for AMP-based campaigns drive customer interactions without leaving the email itself.
    Pricing: Mailmodo offers transparent plans ranging between and per month for 500 contacts. For brands needing higher volume and premium features, the pricing scales accordingly.
    Best for: Free email authentication tools, such as DMARC, SPF and DKIM record checkers
    These three email deliverability software tools cater to different needs, whether you’re striving for enterprise-level optimization or interactive inbox experiences. Ultimately, selecting the right email deliverability tool depends on your goals, audience, and budget.

     
    Your Guide to Email Deliverability: Conclusion
    At the end of the day, email deliverability isn’t just about metrics; it’s about respect. Respecting your audience’s space, preferences, and limits goes a long way toward ensuring your emails land where they’re supposed to: the inbox.
    Want a tool that does it all and more? We give you…MoEngage!
    MoEngage doesn’t just focus on email deliverability; it drives customer interactions at every touchpoint. Now that’s something worth landing in inboxes for. Why not schedule a discovery call to see how MoEngage can do it for your campaigns?
Reduce the Number of Links Bombarding readers with more links than necessary doesn’t just look spammy, but can also tank deliverability. Spam filters are especially ruthless if your links come from too many domains. Stick to one or two actionable, well-placed links per email. Avoid URL shorteners or unverified domains. Don’t forget to preview your email to ensure all hyperlinks are functional and minimal. 6. Segment Your List Generic email blasts, move over. Segmentation allows you to send tailored content based on customer preferences and behavior. Use demographic and behavioral data to create focused email segments. MoEngage’s deep Recency, Frequency, and Monetary Valuesegmentation capabilities help you craft hyper-personalized campaigns that boost clickthrough and engagement rates. 7. Use Real Names and Email Addresses Never underestimate the human touch. Blatantly promotional email addresses like ‘promotions@mybusiness.com’ are easily classified as such and are usually placed in the ‘Promotions’ tab. Using real people’s names instead, like ‘John Doe’ and ‘jdoe@mystore.com’, suggests the sender might be a human being, and the content might not be promotional. 8. Comply with Email Deliverability Regulations Ignoring compliance laws like the General Data Protection Regulationand CAN-SPAM? Nuh-uh. You don’t want your domain to get blacklisted just because you failed to meet a few legal requirements, do you? Always include an opt-out link, honor unsubscribe requests immediately, and only email customers who’ve explicitly given consent to receive your emails. 9. Set Up Brand Indicators for Message IdentificationBIMI is the new gold star for email branding. It lets you display your brand logo next to your emails, boosting trust and recognition, and improving email deliverability. Configure BIMI, which requires a combination of domain authentication and logo verification. Adding this layer increases the likelihood of your emails being opened and decreases spam suspicion.   How to Increase Email Deliverability for Gmail, Yahoo, and Outlook Gmail, Yahoo, and Outlook have enforced strict guidelines to keep spam out and ensure better inbox experiences for subscribers. For brands, this means staying compliant or losing precious inbox placement. Let’s break it down. First things first: your authentication game needs to be airtight. If you’re sending more than 5,000 emails a day, a DMARC policy is non-negotiable. It’s also crucial that the domain in your “From” address is configured with the SPF, DKIM, and DMARC protocols. Additionally, valid forward and reverse DNS/PTR records are required for your email sending domains and IPs. Platforms like MoEngage handle much of this for you during onboarding, but it’s worth double-checking if you’ve added new sending addresses or use your own ESP. Then, make unsubscribing effortless. Both Gmail and Yahoo demand clear, visible, one-click unsubscribe links and mandate that opt-out requests be honored within 48 hours. With MoEngage, this functionality is automatically included for compliant campaigns. If you’re using your own custom setup, ensure subscription tracking is enabled and working in real time to avoid trouble. Finally, you must keep spam rates in check. Yahoo and Gmail expect a spam rate below 0.10%, while hitting 0.30% could spell major email deliverability issues. Prevent this by sticking to opt-in subscribers, sending periodic consent confirmation emails, regularly cleaning your lists, and suppressing unengaged subscribers. 
Maintaining dynamic sending patterns based on customer interests and behavior also helps you adhere to updated compliance standards. By aligning your email drip campaigns with these rules, you’ll ensure maximum inbox placement, keep spam filters at bay, and boost your customer engagement.   How to Conduct an Email Deliverability Test Running an email deliverability test can help you identify potential issues before they wreak havoc on your campaigns. Whether it’s an email authentication problem, unengaged recipients, orspam-triggering mistakes, running these tests ensures your emails make their way into inboxes smoothly. Below, we’ll give you a detailed walkthrough of 5 tips to help you ace your email deliverability testing game. 1. Use Seed Email Lists A seed list is essentially a mini-audience of test email addresses from various providers. Sending your email to this group allows you to measure whether it lands in the inbox, Promotional tab, or the spam folder. How does this help? It gives you insights into any immediate deliverability issues tied to specific providers or domains. For example, if your test email lands in Yahoo’s spam folder, you can troubleshoot the issuebefore it impacts a broader audience. Fixing these bugs early can boost your email deliverability rate and protect your sender reputation. 2. Monitor Email Authentication ProtocolsDuring your email deliverability test, check each security protocol to ensure the email is passing authentication checks, so it comes across as legitimate and trustworthy. A properly authenticated email drastically improves your chances of landing in an inbox. 3. Test Subject Lines and Preheaders Your subject line sets the tone and determines whether someone opens your email or sends it to email purgatory. When testing email deliverability, keep an eye on how different subject lines perform. Use A/B testing tools to experiment with variations and select one that is catchy, yet non-spammy. Combined with preheader tweaks, this can optimize both inbox placement and engagement metrics. 4. Check for Blacklist IssuesEmail blacklists are essentially “no-fly lists” for senders known to spam inboxes. Even the best intentions can land you here if you’re not careful. Before you send any email campaign, you should run an email blacklist check using tools like MXToolbox. Identifying and resolving blacklist issues early significantly reduces the risk of your email being blocked at the server level. Plus, it helps you maintain your all-important sender reputation. 5. Evaluate Email Engagement Metrics Early Sure, email deliverability testing mostly focuses on getting your email into inboxes, but don’t ignore engagement metrics like open rates and clicks during your test phase. Why? Because customer behavior heavily influences what mailbox providers like Gmail think of your emails. Low engagement during testing could suggest your content or timing needs refinement. Test to identify whether certain segments of your subscribersimpact email deliverability performance. Based on these insights, you can remove unengaged subscribers or reconnect them with tailored re-engagement email campaigns later, creating a more engaged and clean list.  Email Deliverability Checklist: Make Sure You’re Prepared Nail your campaigns with these foolproof steps made for B2C marketers to ensure email deliverability.   
3 Best Email Deliverability Tools to Ensure Your Emails Reach Your Customers’ Inboxes From ensuring domain authentication to tracking reputation scores, there are email software platforms to equip you with the insights and features needed to consistently hit inboxes. Here are the 3 best email deliverability tools to consider for your campaigns. 1. MoEngage *shy giggling* We’re honest. *eyelash batting* Seriously, MoEngage is a powerhouse designed specifically for B2C marketers working in industries like Ecommerce, fintech, QSR, and media. While it’s much more than just an email deliverability tool, MoEngage shines by offering advanced deliverability monitoring features. With real-time analytics, automation tools, and even AI-driven delivery optimization, you’ll know when, where, and why your emails are or aren’t landing in inboxes. It lets you filter for email bot opens and set up double opt-in with advanced configuration options. It also offers email deliverability services, including setting up proper authentications, email strategy discussions, assistance with inbox placement/bulking issues, troubleshooting blogs and blacklisting issues, content analysis, and reviewing industry best practices. Pricing: MoEngage offers customized pricing based on business size and requirements, with growth plans starting from /month. Its premium plans offer advanced segmentation, AI-powered optimizations, in-depth analytics, and a lot more. Best for: Dynamic audience segmentation, AI-driven email optimization, and granular insights into email deliverability. 2. Inbox Monster Inbox Monster is the go-to for marketers who require deep transparency into email deliverability metrics. The tool specializes in deliverability testing and gives real-time insights into where your emails are landing—whether it’s the inbox, ‘Promotions’ folder, or spam. It also offers spam trap monitoring, blacklist checking, and even time-sensitive post-send diagnostics, ensuring you get a complete picture of your email performance. Pricing: Inbox Monster operates on a subscription-based pricing model. Plans typically start at /month for smaller teams, with enterprise-level solutions available. Best for: Real-time DMARC and spam trap notifications 3. Mailmodo Mailmodo offers interactive Accelerated Mobile Pagesemails, which are a great choice if you want to improve both deliverability and engagement in one fell swoop. It focuses heavily on keeping your emails out of spam folders by providing tools like spam test previews, inbox placement testing, and personalization options. Its interface is easy to use for marketers who want quick insights and actionable suggestions without needing to wade through overly complex data. Plus, the tool’s smart templates for AMP-based campaigns drive customer interactions without leaving the email itself. Pricing: Mailmodo offers transparent plans ranging between and per month for 500 contacts. For brands needing higher volume and premium features, the pricing scales accordingly. Best for: Free email authentication tools, such as DMARC, SPF and DKIM record checkers These three email deliverability software tools cater to different needs, whether you’re striving for enterprise-level optimization or interactive inbox experiences. Ultimately, selecting the right email deliverability tool depends on your goals, audience, and budget.   Your Guide to Email Deliverability: Conclusion At the end of the day, email deliverability isn’t just about metrics; it’s about respect. 
Respecting your audience’s space, preferences, and limits goes a long way toward ensuring your emails land where they’re supposed to: the inbox. Want a tool that does it all and more? We give you…MoEngage! MoEngage doesn’t just focus on email deliverability; it drives customer interactions at every touchpoint. Now that’s something worth landing in inboxes for. Why not schedule a discovery call to see how MoEngage can do it for your campaigns? The post Email Deliverability: How to Dodge the Spam Folder appeared first on MoEngage. #email #deliverability #how #dodge #spam
    Email Deliverability: How to Dodge the Spam Folder
    Reading Time: 17 minutes Email marketing isn’t just about sending emails. Your emails could have the most compelling subject line, the perfect design, and personalized offers, and still land in your customers’ spam folders. You may call it ‘heartbreak.’ We call it an email deliverability issue. Email deliverability ensures your messages reach your audience where they’ll be read, not in the clutches of spam filters or ignored ‘Promotions’ tabs. Thankfully, this guide is here to arm you with actionable tactics to improve email marketing deliverability. But wait! Before diving headfirst into tactics and tools, let’s take a step back and address the basics of what exactly deliverability is, and how it’s different from email delivery.   What is Email Deliverability? Email deliverability is the ability of your emails to successfully land in your customers’ primary inboxes, rather than getting routed to spam folders or getting rejected. It evaluates whether your email reaches its destination, and if it’s considered trustworthy by mailbox providers like Gmail, Yahoo, or Outlook. Email Delivery vs. Email Deliverability: What’s the Difference? On the surface, email deliverability and email delivery may sound like interchangeable terms, but they’re far from identical. Let’s make it crystal clear: Email delivery means your email has been successfully accepted by your recipient’s email server. In other words, it’s like your email knocking on their door and being allowed into the house. Email deliverability, on the other hand, is what happens after that door is opened. Does your email land in the inbox? Or does it get tossed into the spam folder? Here’s an example to tie it all together: Imagine sending out 1,000 emails. If 950 of them successfully reach the targeted servers, you have an email delivery rate of 95%. But if only 750 of those emails end up in the inbox, your email deliverability rate is 75%. See the difference now? “But what does it matter anyway?” you ask. Why is Email Deliverability Important? Email deliverability is the backbone of every successful email marketing campaign. If your emails don’t land in recipients’ inboxes, your efforts won’t drive results. A strong deliverability strategy ensures your messages reach the right inboxes, increasing engagement and driving action. It also protects your sender reputation by monitoring key metrics like your sender score and IP reputation. Tools like MxToolbox and Talos can help you proactively check your domain and IP health. Ultimately, high email deliverability safeguards your reputation and maximizes the impact of your email marketing efforts.   What Affects Email Deliverability? Below, we’ll discuss the most important factors that can affect your email deliverability. Buckle up, marketers. This is about to get real. 1. Domain and IP Reputation Mailbox providers (think Gmail, Yahoo, or Outlook) use your domain or sender reputation to assess how trustworthy you are. Accordingly, they decide which folder to send your email to. If they see evidence that you send sketchy or unwanted emails (read: spam), your reputation tanks faster than a celebrity caught in a scandal. And so does your ability to hit inboxes. 2. Number of Emails Sent More isn’t always better. Unless we’re talking about pizza or vacation days, of course. When it comes to email volumes, though, the quantity you’re sending can directly impact your email deliverability. If you send out too many emails too soon, mailbox providers might consider this overly aggressive behavior. 
They may slow down your email delivery (throttling) or flag you as spammy. 3. Email Content Does your email content look like it was written by a used car salesman from ‘95? You’re headed for trouble. Here’s the deal: Email providers analyze the content of every email to determine if it’s spammy garbage or something worth reaching an inbox. Things like excessive exclamation points, ALL CAPS, misleading subject lines, or overusing words like “FREE!!!” sound alarm bells. Oh, and including massive files (like 15MB GIFs) or embedding shady links? That’s like grinning with spinach stuck in your teeth. How does email file size affect deliverability, though? Well, it typically takes longer to download and see heavy emails, and that can impact email engagement. Mailbox providers like Gmail also clip messages weighing over 102 KB. The moral of the story is, your email file size should be less than 100 KB. No questions asked. 4. Email Sending Infrastructure Welcome to the tech part of email deliverability. Your email sending infrastructure includes things like: Proper domain authentication (SPF, DKIM, and DMARC): Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication Reporting & Conformance (DMARC). These security protocols prove to mailbox providers that you’re not a shady imposter trying to phish their customers. Dedicated IP addresses: These help establish consistency over time, especially for higher email volumes. Reliable Email Service Providers (ESPs): Choose platforms that prioritize deliverability. In short, they should authenticate your emails, use reputable IPs, and comply with CAN-SPAM, GDPR, and other important regulations. 5. Performance Metrics and User Engagement Here’s where you have to face cold, hard facts: your audience’s reaction to your emails matters more than your own opinion of your email campaigns. Metrics like open rates, click-through rates (CTR), and spam complaint rates aren’t just vanity numbers. If mailbox providers notice that customers are ignoring your emails, deleting them without reading, or marking them as ‘spam’ (ouch), they’ll quietly penalize you by deprioritizing all future emails you send. Translation: Lower engagement = Bad email deliverability. Speaking of which, it’s time to see what makes deliverability good or bad.   How Do You Measure Email Deliverability? Measuring email marketing deliverability is about understanding not only how many emails you’re sending, but also how many actually reach the intended inboxes and how well they perform when they do. To measure email deliverability, you’ll need a sharp eye for identifying patterns in your campaigns. Let’s dive into the most important aspects that impact how you measure and interpret deliverability. What is a Good Email Deliverability Rate? Okay, let’s address the elephant in the inbox: how good is good? In general terms, a “good” email deliverability rate ranges between 95% and 98%. If you’re hitting these numbers, pat yourself on the back—you’re doing far better than average. Anything below 90%, however, is definitely cause for concern, as it signals red flags in your sender reputation, email copy, or email list management tactics. That said, keep in mind that these numbers can vary depending on your industry, audience, and email strategy. For instance, Ecommerce brands targeting global audiences often deal with larger volumes and higher bounces. 
On the other hand, smaller B2C SaaS brands might maintain a higher deliverability baseline due to more niche, curated lists. What is an Email Deliverability Score and How Do You Use It? An email deliverability score assesses your overall email performance based on factors like bounce rates, sender reputation, customer engagement, and list hygiene. Essentially, mailbox providers like Gmail and Outlook assign trust levels to sender domains and IP addresses, and the deliverability score helps you understand how high (or low) their trust in you is. Use this score strategically to: Audit the effectiveness of your latest email campaigns. Track long-term trends in reputation and performance. Adjust tactics, such as improving sender authenticity or list hygiene. Pro tip: Most email marketing platforms can provide you with a deliverability score right inside your dashboard. Sure, the score gives you a bird’s-eye view of the deliverability. But you need to dig deeper to get precise insights into what’s working and not working for your emails.   Top 5 Email Deliverability Metrics to Understand Campaign Performance Here are the five email deliverability metrics every savvy B2C marketer needs to analyze, so let’s look at each of them in detail, shall we? 1. Open Rate The open rate measures the percentage of recipients who open your email after it lands in their inbox. It’s the first metric that indicates whether your subject line, brand, or sending name resonates enough with your audience to make them open your emails. Low open rates might mean your emails aren’t even making it to inboxes (hello, spam folder!) or that your subject lines need to get better at sparking curiosity. 2. Clickthrough Rate (CTR) The CTR is the percentage of recipients who clicked on a link inside your email after opening it. This metric is a direct indicator of how well your email content engages the audience. Did they find value in your email, or did you lose their attention somewhere within your copy? 3. Spam Complaint Rate The spam complaint rate measures the percentage of recipients who mark your email as ‘spam’. This email marketing metric matters because mailbox providers take complaints very seriously, which might tank your sender reputation. 4. Unsubscribe Rate Unsubscribe rate tracks how many recipients opted out of receiving your emails. Recipients unsubscribing is essentially them telling you, “Thanks, but no thanks.” While it’s natural to see a few unsubscribes here and there, a sudden spike signals a disconnect between your email strategy and audience expectations (think irrelevant content or too-frequent emails). It’s a sign of misaligned targeting or irrelevant content and can provide insight into how to refine your audience segmentation. 5. Deliverability Rate The deliverability rate measures how many emails actually made it to the recipient’s inbox, compared to how many you initially sent.   3 Critical Email Deliverability Issues and Problems to Avoid Even the best marketers face roadblocks on the highway to the customer’s inbox. Here are three email deliverability issues to sidestep early: 1. How to Avoid Spam Filters in Email Marketing Spam filters catch anything they deem suspicious or irrelevant and banish it to the spam folder. For B2C marketers like you, this spells disaster. Emails flagged as spam don’t just fail to reach your audience; they damage your sender reputation and affect future deliverability. The good news?
With the right practices, you can dodge spam filters and ensure your emails land in the inbox. “Pull Rather Than Push” The best way to avoid spam filters is to focus on engagement. Instead of spamming your users with hollow promotional material, entice them with rich, relevant, and consumable content that provides unmistakable value. Whether it’s exclusive offers or personalized updates, make emails worth opening. Maintain a Clean Email List Emailing outdated, invalid, or irrelevant addresses is your one-way ticket to spam territory. Worse, you might email a spam trap. It’s an intentionally invalid address designed to catch domains with poor email practices (more on that later). Spam filters look for patterns, and a high volume of bounces or spam reports can slam your domain reputation, decreasing overall deliverability. One way of recognizing the issue is when your email open rates are drastically low. Create Content in the Right Balance and Tone Spam filters are deeply sensitive to the content of your email—everything from subject lines to text color can trigger red flags. To pass their strict scrutiny, your content must be clean, relevant, and balanced. What to Avoid Words like “FREE!!!” or “Make Millions Now” trigger alarms. Avoid excessive punctuation, caps lock, or promotional buzzwords. Keep content short, relevant, and readable. Long-winded emails frustrate customers and scream “spam” to filters. What to Focus On Add recipients’ names and offer content tailored to their activity or preferences. Optimize your emails for mobile. Most customers check emails on their phones, so ensure proper rendering on smaller screens. Include alt text for images and a plain-text version of every email for better compatibility. Build a Relationship with Subscribers Spam filters penalize cold, irrelevant communication, but fostering a respectful relationship with subscribers can work in your favor. Don’t ignore unsubscribe requests. Rather, respect their choice to opt out. Bombarding unsubscribers breeds resentment and spam complaints. Encourage subscribers to add your domain to their address book. This keeps your emails prioritized and inbox-friendly. Tailor communication around subscriber needs rather than what your company wants to push. Strong customer relationships not only improve email deliverability, but also create brand loyalty. That’s where customer relationship emails come into the picture. 2. How to Maintain a Healthy Email Domain Reputation Your email domain reputation is a key factor that mailbox providers use to decide whether your emails land in the inbox, spam folder, or get blocked altogether. A poor reputation, caused by spam complaints, bounces, or sending to spam traps, can destroy your deliverability and damage your brand’s credibility. To maintain your reputation, focus on consistent sending patterns, clean email lists, and authenticated domains using SPF, DKIM, and DMARC. Avoid sudden spikes in email volume, and ensure your content is relevant and personalized to improve engagement. 3. Avoid Email Spam Traps Spam traps or honeypots are invalid email addresses created by mailbox providers to identify and penalize senders with poor email list hygiene. These addresses don’t engage with emails and can enter contact lists through outdated addresses, typos, or unethical practices like scraping or purchasing lists. Hitting spam traps damages your sender reputation, affects email deliverability, and can result in emails landing in spam folders or being blocked altogether. 
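Before we get to the trap types, here is a minimal sketch of the kind of screening that keeps such addresses off a list in the first place. It is illustrative only, not a MoEngage feature: the typo-domain map and the 180-day inactivity cutoff are assumptions you would tune to your own data.

from datetime import datetime, timedelta

# Hypothetical, simplified subscriber records: address plus last engagement date.
subscribers = [
    {"email": "ana@gmail.com",  "last_engaged": datetime(2025, 4, 2)},
    {"email": "bob@gail.com",   "last_engaged": datetime(2023, 1, 10)},   # typo domain
    {"email": "cleo@yahoo.com", "last_engaged": datetime(2022, 11, 5)},   # long inactive
]

TYPO_DOMAINS = {"gail.com": "gmail.com", "gmial.com": "gmail.com", "yaho.com": "yahoo.com"}
INACTIVITY_CUTOFF = timedelta(days=180)  # assumption: suppress after ~6 months of silence

def screen(subscriber, now=None):
    """Return 'keep', 'fix_typo', or 'suppress' for a single subscriber record."""
    now = now or datetime.now()
    domain = subscriber["email"].split("@", 1)[1].lower()
    if domain in TYPO_DOMAINS:
        return "fix_typo"   # likely user-input error; confirm with the subscriber, don't guess
    if now - subscriber["last_engaged"] > INACTIVITY_CUTOFF:
        return "suppress"   # candidate for a re-permission campaign, then removal
    return "keep"

for s in subscribers:
    print(s["email"], "->", screen(s))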
Types of Email Spam Traps There are three main types of spam traps: Pristine Spam Traps: These are never-used addresses created to catch senders who purchase lists or scrape public forums. Recycled Traps: These are once-valid addresses repurposed into traps after prolonged inactivity and failure to suppress them. Typo Traps: These are email addresses containing user-input errors, like “gail.com” instead of “gmail.com.” Each of these indicates lapses in list hygiene or collection processes and contributes to deliverability issues. How to Identify Spam Traps Declining email deliverability rates often point to spam traps, because these addresses never engage with emails or click through content. They are typically outdated, invalid, or associated with suspicious email collection methods. If you notice drops in deliverability metrics like increasing bounces or disengagement, tools like SpamCop can help detect these problematic addresses. How to Avoid Spam Traps Precision list management is key to minimizing spam traps. Double opt-ins verify addresses and ensure subscribers consent to communication. Send confirmation emails upon sign-up to validate authenticity, and consistently remove inactive subscribers to prevent recycled traps. Buying contact lists is a strict no-no, as these are a magnet for spam traps. If issues arise, validate your list using professional tools and re-permission campaigns to re-engage or suppress inactive contacts. A clean, permission-based list not only avoids spam traps, but also enhances engagement and deliverability.   How to Improve Email Deliverability: 9 Best Practices to Follow Improving deliverability requires discipline, tactics, and the right email deliverability service. Follow these nine tried-and-tested deliverability best practices to ensure your emails land in customers’ inboxes, and not the dreaded spam folder. 1. Confirm Sender Authentication Before anything else, authenticate your domain with SPF, DKIM, and DMARC. Without them, email providers might assume your campaigns are malicious and block them altogether. Work with your IT team or use platforms like MoEngage to configure these settings efficiently. MoEngage ensures your domain’s authentication is in place, enhancing your odds of bypassing spam filters. 2. Build a Clean Email List As we’ve mentioned before, a high-quality email list is non-negotiable for strong deliverability. Outdated or purchased lists kill your reputation faster than a bad joke at a standup gig. Here are a few pointers to help you clean and grow your email list regularly: Use double opt-ins to ensure new subscribers genuinely want your emails. Regularly remove bounced, invalid, and unengaged addresses from your list. Never buy or scrape email lists. It’s unethical, and in some regions, outright illegal. 3. Get the Email Copy Right Email content dictates how your audience connects with your brand (or doesn’t). Poorly written, overly spammy-looking content will ruin your reputation with both readers and ESPs. Craft personalized, value-driven copy. Steer clear of all-caps headlines, exclamation overload, or spammy language (we can’t seem to stress this enough in this email deliverability guide). A/B test your emails to identify what resonates best with your audience. 4. Use Fewer Images Yes, visuals are powerful, but they can also slow down your email’s load time and raise red flags for spam filters, especially if you add multiple clickable elements (like buttons).
Keep your email design balanced with a higher ratio of text to images. A 40:60 text-to-image ratio is considered ideal. Use alt text for each image to ensure accessibility, in case images don’t fully render. Maintain standard HTML email template sizes — it’s 600 pixels for desktops, 320 pixels for vertical, and 480 pixels for horizontal views on mobile devices. Keep your HTML code light and clean without any hidden images or URLs. 5. Reduce the Number of Links Bombarding readers with more links than necessary doesn’t just look spammy, but can also tank deliverability. Spam filters are especially ruthless if your links come from too many domains. Stick to one or two actionable, well-placed links per email. Avoid URL shorteners or unverified domains. Don’t forget to preview your email to ensure all hyperlinks are functional and minimal. 6. Segment Your List Generic email blasts, move over. Segmentation allows you to send tailored content based on customer preferences and behavior. Use demographic and behavioral data to create focused email segments. MoEngage’s deep Recency, Frequency, and Monetary Value (RFM) segmentation capabilities help you craft hyper-personalized campaigns that boost clickthrough and engagement rates. 7. Use Real Names and Email Addresses Never underestimate the human touch. Blatantly promotional email addresses like ‘promotions@mybusiness.com’ are easily classified as such and are usually placed in the ‘Promotions’ tab. Using real people’s names instead, like ‘John Doe’ and ‘jdoe@mystore.com’, suggests the sender might be a human being, and the content might not be promotional. 8. Comply with Email Deliverability Regulations Ignoring compliance laws like the General Data Protection Regulation (GDPR) and CAN-SPAM? Nuh-uh. You don’t want your domain to get blacklisted just because you failed to meet a few legal requirements, do you? Always include an opt-out link, honor unsubscribe requests immediately, and only email customers who’ve explicitly given consent to receive your emails. 9. Set Up Brand Indicators for Message Identification (BIMI) BIMI is the new gold star for email branding. It lets you display your brand logo next to your emails, boosting trust and recognition, and improving email deliverability. Configure BIMI, which requires a combination of domain authentication and logo verification. Adding this layer increases the likelihood of your emails being opened and decreases spam suspicion.   How to Increase Email Deliverability for Gmail, Yahoo, and Outlook Gmail, Yahoo, and Outlook have enforced strict guidelines to keep spam out and ensure better inbox experiences for subscribers. For brands, this means staying compliant or losing precious inbox placement. Let’s break it down. First things first: your authentication game needs to be airtight. If you’re sending more than 5,000 emails a day, a DMARC policy is non-negotiable. It’s also crucial that the domain in your “From” address is configured with the SPF, DKIM, and DMARC protocols. Additionally, valid forward and reverse DNS/PTR records are required for your email sending domains and IPs. Platforms like MoEngage handle much of this for you during onboarding, but it’s worth double-checking if you’ve added new sending addresses or use your own ESP. Then, make unsubscribing effortless. Both Gmail and Yahoo demand clear, visible, one-click unsubscribe links and mandate that opt-out requests be honored within 48 hours. With MoEngage, this functionality is automatically included for compliant campaigns. 
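For senders who assemble messages themselves rather than relying on a platform, the one-click requirement (RFC 8058) boils down to two headers. Here is a minimal sketch using Python’s standard email library; the addresses and unsubscribe URL are hypothetical placeholders, and your unsubscribe endpoint must accept the POST request without any further human interaction.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "Example Store <news@example.com>"
msg["To"] = "subscriber@example.org"
msg["Subject"] = "This week at Example Store"
# One-click unsubscribe (RFC 8058): offer a mailto plus an HTTPS endpoint that
# honors a bare POST, and declare the one-click semantics explicitly.
msg["List-Unsubscribe"] = "<mailto:unsubscribe@example.com>, <https://example.com/unsubscribe?uid=123>"
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Plain-text body, with a visible unsubscribe link in the footer as well.")

print(msg.as_string())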
If you’re using your own custom setup, ensure subscription tracking is enabled and working in real time to avoid trouble. Finally, you must keep spam rates in check. Yahoo and Gmail expect a spam rate below 0.10%, while hitting 0.30% could spell major email deliverability issues. Prevent this by sticking to opt-in subscribers, sending periodic consent confirmation emails, regularly cleaning your lists, and suppressing unengaged subscribers. Maintaining dynamic sending patterns based on customer interests and behavior also helps you adhere to updated compliance standards. By aligning your email drip campaigns with these rules, you’ll ensure maximum inbox placement, keep spam filters at bay, and boost your customer engagement.   How to Conduct an Email Deliverability Test Running an email deliverability test can help you identify potential issues before they wreak havoc on your campaigns. Whether it’s an email authentication problem, unengaged recipients, or (gasp!) spam-triggering mistakes, running these tests ensures your emails make their way into inboxes smoothly. Below, we’ll give you a detailed walkthrough of 5 tips to help you ace your email deliverability testing game. 1. Use Seed Email Lists A seed list is essentially a mini-audience of test email addresses from various providers (like Gmail, Yahoo, and Outlook). Sending your email to this group allows you to measure whether it lands in the inbox, Promotional tab, or the spam folder. How does this help? It gives you insights into any immediate deliverability issues tied to specific providers or domains. For example, if your test email lands in Yahoo’s spam folder, you can troubleshoot the issue (such as tweaking the content or verifying authentication settings) before it impacts a broader audience. Fixing these bugs early can boost your email deliverability rate and protect your sender reputation. 2. Monitor Email Authentication Protocols (SPF, DKIM, and DMARC) During your email deliverability test, check each security protocol to ensure the email is passing authentication checks, so it comes across as legitimate and trustworthy. A properly authenticated email drastically improves your chances of landing in an inbox. 3. Test Subject Lines and Preheaders Your subject line sets the tone and determines whether someone opens your email or sends it to email purgatory. When testing email deliverability, keep an eye on how different subject lines perform. Use A/B testing tools to experiment with variations and select one that is catchy, yet non-spammy. Combined with preheader tweaks (essentially your preview text), this can optimize both inbox placement and engagement metrics. 4. Check for Blacklist Issues (And Stay Off Them) Email blacklists are essentially “no-fly lists” for senders known to spam inboxes. Even the best intentions can land you here if you’re not careful (think: sending to outdated email lists or ignoring proper opt-in procedures). Before you send any email campaign, you should run an email blacklist check using tools like MXToolbox. Identifying and resolving blacklist issues early significantly reduces the risk of your email being blocked at the server level. Plus, it helps you maintain your all-important sender reputation. 5. Evaluate Email Engagement Metrics Early Sure, email deliverability testing mostly focuses on getting your email into inboxes, but don’t ignore engagement metrics like open rates and clicks during your test phase. Why? 
Because customer behavior heavily influences what mailbox providers like Gmail think of your emails. Low engagement during testing could suggest your content or timing needs refinement. Test to identify whether certain segments of your subscribers (e.g., inactive subscribers) impact email deliverability performance. Based on these insights, you can remove unengaged subscribers or reconnect them with tailored re-engagement email campaigns later, creating a more engaged and clean list.   [Infographic] Email Deliverability Checklist: Make Sure You’re Prepared Nail your campaigns with these foolproof steps made for B2C marketers to ensure email deliverability.   3 Best Email Deliverability Tools to Ensure Your Emails Reach Your Customers’ Inboxes From ensuring domain authentication to tracking reputation scores, there are email software platforms to equip you with the insights and features needed to consistently hit inboxes. Here are the 3 best email deliverability tools to consider for your campaigns. 1. MoEngage *shy giggling* We’re honest. *eyelash batting* Seriously, MoEngage is a powerhouse designed specifically for B2C marketers working in industries like Ecommerce, fintech, QSR, and media. While it’s much more than just an email deliverability tool (think of it as a complete customer engagement and retention platform), MoEngage shines by offering advanced deliverability monitoring features. With real-time analytics, automation tools, and even AI-driven delivery optimization, you’ll know when, where, and why your emails are or aren’t landing in inboxes. It lets you filter for email bot opens and set up double opt-in with advanced configuration options. It also offers email deliverability services, including setting up proper authentications, email strategy discussions, assistance with inbox placement/bulking issues, troubleshooting blogs and blacklisting issues, content analysis, and reviewing industry best practices. Pricing: MoEngage offers customized pricing based on business size and requirements, with growth plans starting from $750/month. Its premium plans offer advanced segmentation, AI-powered optimizations, in-depth analytics, and a lot more. Best for: Dynamic audience segmentation, AI-driven email optimization, and granular insights into email deliverability. 2. Inbox Monster Inbox Monster is the go-to for marketers who require deep transparency into email deliverability metrics. The tool specializes in deliverability testing and gives real-time insights into where your emails are landing—whether it’s the inbox, ‘Promotions’ folder, or spam. It also offers spam trap monitoring, blacklist checking, and even time-sensitive post-send diagnostics, ensuring you get a complete picture of your email performance. Pricing: Inbox Monster operates on a subscription-based pricing model. Plans typically start at $79/month for smaller teams, with enterprise-level solutions available. Best for: Real-time DMARC and spam trap notifications 3. Mailmodo Mailmodo offers interactive Accelerated Mobile Pages (AMP) emails, which are a great choice if you want to improve both deliverability and engagement in one fell swoop. It focuses heavily on keeping your emails out of spam folders by providing tools like spam test previews, inbox placement testing, and personalization options. Its interface is easy to use for marketers who want quick insights and actionable suggestions without needing to wade through overly complex data. 
Plus, the tool’s smart templates for AMP-based campaigns drive customer interactions without leaving the email itself. Pricing: Mailmodo offers transparent plans ranging between $39 and $159 per month for 500 contacts. For brands needing higher volume and premium features, the pricing scales accordingly. Best for: Free email authentication tools, such as DMARC, SPF and DKIM record checkers These three email deliverability software tools cater to different needs, whether you’re striving for enterprise-level optimization or interactive inbox experiences. Ultimately, selecting the right email deliverability tool depends on your goals, audience, and budget.   Your Guide to Email Deliverability: Conclusion At the end of the day, email deliverability isn’t just about metrics; it’s about respect. Respecting your audience’s space, preferences, and limits goes a long way toward ensuring your emails land where they’re supposed to: the inbox. Want a tool that does it all and more? We give you… (drumroll) MoEngage! MoEngage doesn’t just focus on email deliverability; it drives customer interactions at every touchpoint. Now that’s something worth landing in inboxes for. Why not schedule a discovery call to see how MoEngage can do it for your campaigns? The post Email Deliverability: How to Dodge the Spam Folder appeared first on MoEngage.
  • Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer

    AlphaEvolve imagined as a genetic algorithm coupled to a large language model. Picture created by the author using various tools including Dall-E3 via ChatGPT.

    Large Language Models have undeniably revolutionized how many of us approach coding, but they’re often more like a super-powered intern than a seasoned architect. Errors, bugs and hallucinations happen all the time, and it might even happen that the code runs well but… it’s not doing exactly what we wanted.

    Now, imagine an AI that doesn’t just write code based on what it’s seen, but actively evolves it. At first, this simply means you increase the chances of getting the right code written; however, it goes far beyond that: Google showed that the same methodology can also be used to discover algorithms that are faster, more efficient, and sometimes entirely new.

    I’m talking about AlphaEvolve, the recent bombshell from Google DeepMind. Let me say it again: it isn’t just another code generator, but rather a system that generates and evolves code, allowing it to discover new algorithms. Powered by Google’s formidable Gemini models, AlphaEvolve could revolutionize how we approach coding, mathematics, algorithm design, and, why not, data analysis itself.

    How Does AlphaEvolve ‘Evolve’ Code?

    Think of it like natural selection, but for software. That is, think about Genetic Algorithms, which have existed in data science, numerical methods and computational mathematics for decades. Briefly, instead of starting from scratch every time, AlphaEvolve takes an initial piece of code – possibly a “skeleton” provided by a human, with specific areas marked for improvement – and then runs on it an iterative process of refinement.

    Let me summarize here the procedure detailed in DeepMind’s white paper:

    Intelligent prompting: AlphaEvolve is “smart” enough to craft its own prompts for the underlying Gemini LLM. These prompts instruct Gemini to act like a world-class expert in a specific domain, armed with context from previous attempts, including the points that seemed to have worked correctly and those that are clear failures. This is where the massive context windows of models like Gemini come into play.

    Creative mutation: The LLM then generates a diverse pool of “candidate” solutions – variations and mutations of the original code, exploring different approaches to solve the given problem. This parallels very closely the inner working of regular genetic algorithms.

    Survival of the fittest: Again as in genetic algorithms, candidate solutions are automatically compiled, run, and rigorously evaluated against predefined metrics.

    Breeding of the top programs: The best-performing solutions are selected and become the “parents” for a next generation, just like in genetic algorithms. The successful traits of the parent programs are fed back into the prompting mechanism.

    Repeat: This cycle – generate, test, select, learn – repeats, and with each iteration, AlphaEvolve explores the vast search space of possible programs, gradually homing in on solutions that are better and better while purging those that fail. The longer you let it run, the more sophisticated and optimized the solutions can become. A toy version of this loop is sketched in code below.
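    To make the analogy concrete, here is a deliberately tiny Python sketch of that generate-test-select-learn loop. It is not DeepMind’s code: the “mutate” step below just nudges a few numeric weights at random, standing in for the real system where Gemini is prompted to rewrite marked blocks of a program, and the evaluator is likewise a toy stand-in for AlphaEvolve’s automatic compilation and scoring.

    import random

    TARGET = [3.0, -1.5, 0.25]  # hidden "ideal" weights the toy heuristic should discover

    def fitness(candidate):
        """Automatic evaluator: higher is better (negative squared error vs. the target)."""
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def mutate(parent):
        """Placeholder for the LLM's 'creative mutation' of a parent program."""
        return [w + random.gauss(0, 0.3) for w in parent]

    def evolve(generations=200, pool_size=20, survivors=4):
        population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pool_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)          # survival of the fittest
            parents = population[:survivors]
            population = parents + [mutate(random.choice(parents))  # breeding of the top programs
                                    for _ in range(pool_size - survivors)]
        return max(population, key=fitness)

    best = evolve()
    print("best weights:", [round(w, 2) for w in best], "fitness:", round(fitness(best), 4))

    Swap mutate for a call to an actual LLM that rewrites code blocks, and fitness for compiling and benchmarking the generated program, and you have the skeleton the white paper describes.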

    Building on Previous Attempts

    AlphaEvolve is the successor to earlier Google projects like AlphaCode and, more directly, to FunSearch. FunSearch was a fascinating proof of concept that showed how LLMs could discover new mathematical insights by evolving small Python functions.

    AlphaEvolve took that concept and “injected it with steroids”. I mean this for various reasons…

    First, because thanks to Gemini’s huge token window, AlphaEvolve can grapple with entire codebases, hundreds of lines long, not just tiny functions as in early tests like FunSearch. Second, because like other LLMs, Gemini has seen thousands and thousands of pieces of code in tens of programming languages; hence it has covered a wider variety of tasks and become a kind of polyglot programmer.

    Note that with smarter LLMs as engines, AlphaEvolve can itself evolve to become faster and more efficient in its search for solutions and optimal programs.

    AlphaEvolve’s Mind-Blowing Results on Real-World Problems

    Here are the most interesting applications presented in the white paper:

    Optimizing efficiency at Google’s data centers: AlphaEvolve discovered a new scheduling heuristic that squeezed out a 0.7% saving in Google’s computing resources. This may look small, but at Google’s scale this means a substantial ecological and monetary cut!

    Designing better AI chips: AlphaEvolve could simplify some of the complex circuits within Google’s TPUs, specifically for the matrix multiplication operations that are the lifeblood of modern AI. This improves calculation speeds and again contributes to lower ecological and economic costs.

    Faster AI training: AlphaEvolve even turned its optimization gaze inward, by accelerating a matrix multiplication library used in training the very Gemini models that power it! This means a slight but meaningful reduction in AI training times and, again, lower ecological and economic costs!

    Numerical methods: In a kind of validation test, AlphaEvolve was set loose on over 50 notoriously tricky open problems in mathematics. In around 75% of them, it independently rediscovered the best-known human solutions!

    Towards Self-Improving AI?

    One of the most profound implications of tools like AlphaEvolve is the “virtuous cycle” by which AI could improve AI models themselves. Moreover, more efficient models and hardware make AlphaEvolve itself more powerful, enabling it to discover even deeper optimizations. That’s a feedback loop that could dramatically accelerate AI progress, and lead who knows where. This is somehow using AI to make AI better, faster, and smarter – a genuine step on the path towards more powerful and perhaps general artificial intelligence.

    Leaving aside this reflection, which quickly gets close to the realm of science fiction, the point is that for a vast class of problems in science, engineering, and computation, AlphaEvolve could represent a paradigm shift. As a computational chemist and biologist, I myself use tools based on LLMs and reasoning AI systems to assist my work, write and debug programs, test them, analyze data more rapidly, and more. With what DeepMind has presented now, it becomes even clearer that we are approaching a future where AI doesn’t just execute human instructions but becomes a creative partner in discovery and innovation.

    For some months already we have been moving from AI that completes our code to AI that creates it almost entirely, and tools like AlphaEvolve will push us toward a time where AI simply sits down to crack problems with us, writing and evolving code to get to optimal and possibly entirely unexpected solutions. No doubt the next few years are going to be wild.

    References and Related Reads

    DeepMind’s blog post and white paper on AlphaEvolve

    A Google Colab notebook with the mathematical discoveries of AlphaEvolve outlined in Section 3 of the paper!

    Powerful Data Analysis and Plotting via Natural Language Requests by Giving LLMs Access to Functions

    New DeepMind Work Unveils Supreme Prompt Seeds for Language Models

    www.lucianoabriata.com I write about everything that lies in my broad sphere of interests: nature, science, technology, programming, etc. Subscribe to get my new stories by email. To consult about small jobs check my services page here. You can contact me here. You can tip me here.

    The post Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer appeared first on Towards Data Science.
    Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer
AlphaEvolve imagined as a genetic algorithm coupled to a large language model. Picture created by the author using various tools, including DALL-E 3 via ChatGPT.

Large Language Models have undeniably revolutionized how many of us approach coding, but they are often more like a super-powered intern than a seasoned architect. Errors, bugs and hallucinations happen all the time, and sometimes the code runs fine but doesn't do exactly what we wanted. Now imagine an AI that doesn't just write code based on what it has seen, but actively evolves it. At first glance, that simply raises the odds of getting the right code written; but it goes much further: Google showed that the same methodology can discover new algorithms that are faster, more efficient, and sometimes entirely new. I'm talking about AlphaEvolve, the recent bombshell from Google DeepMind. Let me say it again: it isn't just another code generator, but a system that generates and evolves code, allowing it to discover new algorithms. Powered by Google's formidable Gemini models (which I intend to cover soon, because I'm amazed at their power!), AlphaEvolve could change how we approach coding, mathematics, algorithm design, and even data analysis itself.

How Does AlphaEvolve ‘Evolve’ Code?

Think of it like natural selection, but for software; that is, think of Genetic Algorithms, which have existed in data science, numerical methods and computational mathematics for decades. Briefly, instead of starting from scratch every time, AlphaEvolve takes an initial piece of code, possibly a "skeleton" provided by a human with specific areas marked for improvement, and then runs an iterative process of refinement on it. Let me summarize the procedure detailed in DeepMind's white paper (a minimal code sketch of the loop follows the list):

- Intelligent prompting: AlphaEvolve is "smart" enough to craft its own prompts for the underlying Gemini LLM. These prompts instruct Gemini to act like a world-class expert in a specific domain, armed with context from previous attempts, including the points that seemed to work and those that were clear failures. This is where the massive context windows of models like Gemini (you can use up to a million tokens even in Google's AI Studio) come into play.
- Creative mutation: The LLM then generates a diverse pool of "candidate" solutions: variations and mutations of the original code that explore different approaches to the problem. This closely parallels the inner workings of regular genetic algorithms.
- Survival of the fittest: Much as in genetic algorithms, candidate solutions are automatically compiled, run, and rigorously evaluated against predefined metrics.
- Breeding of the top programs: The best-performing solutions are selected and become the "parents" of the next generation, just as in genetic algorithms. The successful traits of the parent programs are fed back into the prompting mechanism.
- Repeat (to evolve): This cycle of generate, test, select, learn repeats, and with each iteration AlphaEvolve explores the vast search space of possible programs, gradually homing in on better and better solutions while purging those that fail. The longer you let it run (what the researchers call "test-time compute"), the more sophisticated and optimized the solutions can become.
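To make the loop above concrete, here is a minimal sketch of the generate-evaluate-select cycle in Python. It is not DeepMind's implementation: the names (Program, llm_propose_variants, evaluate_candidate) and the way the LLM call and the evaluator are stubbed out are assumptions made purely for illustration; in AlphaEvolve the mutation step is a Gemini call whose prompt embeds the best parents and notes on past attempts, and evaluation compiles and runs real code against real metrics.

```python
import random
from dataclasses import dataclass

# Minimal sketch of the evolve loop described above. NOT DeepMind's code:
# Program, llm_propose_variants and evaluate_candidate are hypothetical
# stand-ins that only mirror the control flow of the five steps.

@dataclass
class Program:
    code: str
    score: float = float("-inf")

def llm_propose_variants(parents, n_children):
    """'Creative mutation': in AlphaEvolve this is a Gemini call whose prompt
    embeds the parent programs plus notes on what worked and what failed.
    Here we just clone a random parent so the sketch stays runnable."""
    return [Program(code=random.choice(parents).code) for _ in range(n_children)]

def evaluate_candidate(program: Program) -> float:
    """'Survival of the fittest': compile/run the candidate and score it against
    predefined metrics (correctness, speed, resource use...). Stubbed here."""
    return random.random()

def evolve(initial_code: str, generations: int = 10,
           population_size: int = 8, n_parents: int = 2) -> Program:
    seed = Program(code=initial_code)
    seed.score = evaluate_candidate(seed)
    population = [seed]
    for _ in range(generations):
        # Breeding of the top programs: the best candidates become parents.
        parents = sorted(population, key=lambda p: p.score, reverse=True)[:n_parents]
        # Creative mutation: ask the LLM for a fresh pool of variants.
        children = llm_propose_variants(parents, population_size)
        # Survival of the fittest: score every new candidate.
        for child in children:
            child.score = evaluate_candidate(child)
        # Repeat: parents plus children form the next generation.
        population = parents + children
    return max(population, key=lambda p: p.score)

best = evolve("def heuristic(x):\n    return x  # region marked for improvement")
print(round(best.score, 3))
```

In the real system the population lives in a program database, evaluation is fully automated (real compilers, benchmarks, or mathematical checks), and the prompt builder is itself a key piece of engineering; the sketch only captures the shape of the cycle.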
Building on Previous Attempts

AlphaEvolve is the successor to earlier Google projects like AlphaCode (which tackled competitive programming) and, more directly, FunSearch. FunSearch was a fascinating proof of concept that showed how LLMs could discover new mathematical insights by evolving small Python functions. AlphaEvolve took that concept and "injected it with steroids", for several reasons. First, thanks to Gemini's huge token window, AlphaEvolve can grapple with entire codebases hundreds of lines long, not just the tiny functions of early tests like FunSearch. Second, like other LLMs, Gemini has seen enormous amounts of code in dozens of programming languages, so it has covered a wider variety of tasks (different languages tend to dominate different domains) and has become a kind of polyglot programmer. Note that with smarter LLMs as engines, AlphaEvolve can itself evolve to become faster and more efficient in its search for solutions and optimal programs.

AlphaEvolve’s Mind-Blowing Results on Real-World Problems

Here are the most interesting applications presented in the white paper (a toy sketch of how such problems can be framed, with a marked skeleton and an automated evaluator, follows the list):

- Optimizing efficiency at Google’s data centers: AlphaEvolve discovered a new scheduling heuristic that squeezed out a 0.7% saving in Google's computing resources. This may look small, but at Google's scale it means a substantial ecological and monetary cut!
- Designing better AI chips: AlphaEvolve could simplify some of the complex circuits within Google's TPUs, specifically for the matrix multiplication operations that are the lifeblood of modern AI. This improves calculation speeds and again lowers ecological and economic costs.
- Faster AI training: AlphaEvolve even turned its optimization gaze inward, accelerating a matrix multiplication library used in training the very Gemini models that power it! That is a small percentage cut in AI training time, yet sizable in absolute terms, and once more it lowers ecological and economic costs.
- Numerical methods: In a kind of validation test, AlphaEvolve was set loose on over 50 notoriously tricky open problems in mathematics. In around 75% of them, it independently rediscovered the best-known human solutions!
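As promised above, here is a toy illustration of how such a problem might be framed for an evolutionary code search: a human-written skeleton in which only a marked block is meant to be rewritten, plus an automated evaluator that turns "better" into a number. The bin-packing task, the function names, and the exact marker syntax are assumptions made for this sketch; the white paper describes human-marked regions and automated evaluators, but this is not Google's actual interface.

```python
# Illustrative only: a "skeleton" with a marked region that the evolutionary
# search is allowed to rewrite, plus the automated metric used for selection.
# Marker names, the toy bin-packing task and all identifiers are assumptions.

# --- skeleton: only the marked block would be rewritten by the system ---
def choose_bin(item, bins, capacity):
    # EVOLVE-BLOCK-START
    # Baseline heuristic: place the item in the first bin that fits.
    # The search would replace this block with better-scoring variants.
    for i, load in enumerate(bins):
        if load + item <= capacity:
            return i
    return None
    # EVOLVE-BLOCK-END

# --- evaluator: the automated metric that decides which variants survive ---
def evaluate(choose_fn, items, capacity=1.0):
    """Score a candidate heuristic: fewer bins used means a higher score."""
    bins = []
    for item in items:
        idx = choose_fn(item, bins, capacity)
        if idx is None:
            bins.append(item)      # open a new bin
        else:
            bins[idx] += item      # place into an existing bin
    return -len(bins)              # negate so that higher is better

if __name__ == "__main__":
    items = [0.42, 0.25, 0.27, 0.07, 0.61, 0.33, 0.55, 0.18]
    print("baseline score:", evaluate(choose_bin, items))
```

The key property is that the evaluator runs with no human in the loop and returns a score, which is what lets the system try huge numbers of variants; in the applications above that score would be something like compute recovered by a scheduling heuristic or the cost of a matrix multiplication routine.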
Towards Self-Improving AI?

One of the most profound implications of tools like AlphaEvolve is the "virtuous cycle" by which AI could improve AI models themselves. Moreover, more efficient models and hardware make AlphaEvolve itself more powerful, enabling it to discover even deeper optimizations. That is a feedback loop that could dramatically accelerate AI progress, and lead who knows where. This is, in a way, using AI to make AI better, faster and smarter: a genuine step on the path towards more powerful and perhaps general artificial intelligence. Leaving aside this reflection, which quickly drifts into science fiction, the point is that for a vast class of problems in science, engineering, and computation, AlphaEvolve could represent a paradigm shift. As a computational chemist and biologist, I myself use tools based on LLMs and reasoning AI systems to assist my work: writing and debugging programs, testing them, analyzing data more rapidly, and more. With what DeepMind has presented now, it becomes even clearer that we are approaching a future where AI doesn't just execute human instructions but becomes a creative partner in discovery and innovation. For some months now we have been moving from AI that completes our code to AI that creates it almost entirely, and tools like AlphaEvolve will push us towards times when AI simply sits down to crack problems with (or for!) us, writing and evolving code to reach optimal and possibly entirely unexpected solutions. No doubt the next few years are going to be wild.

References and Related Reads

- DeepMind's blog post and white paper on AlphaEvolve
- A Google Colab notebook with the mathematical discoveries of AlphaEvolve outlined in Section 3 of the paper
- Powerful Data Analysis and Plotting via Natural Language Requests by Giving LLMs Access to Functions
- New DeepMind Work Unveils Supreme Prompt Seeds for Language Models

www.lucianoabriata.com
I write about everything that lies in my broad sphere of interests: nature, science, technology, programming, etc. Subscribe to get my new stories by email. To consult about small jobs, check my services page here. You can contact me here. You can tip me here.