Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

This week, more than 140 civil rights and consumer protection organizations signed a letter to Congress opposing legislation that would preempt state and local laws governing artificial intelligence (AI) for the next decade.

House Republicans last week added a broad 10-year ban on state and local AI regulations to the Budget Reconciliation Bill that’s currently being debated in the House. The bill would prevent state and local oversight without providing federal alternatives.

This year alone, about two-thirds of US states have proposed or enacted more than 500 laws governing AI technology. If passed, the federal bill would stop those laws from being enforced.

The nonprofit Center for Democracy & Technology (CDT) joined the other organizations in signing the opposition letter, which warns that removing AI protections leaves Americans vulnerable to current and emerging AI risks.

Travis Hall, the CDT’s director for state engagement, answered questions posed by Computerworld to help determine the impact of the House Reconciliation Bill’s moratorium on AI regulations.

Why is regulating AI important, and what are the potential dangers it poses without oversight?

AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report.

These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers in a lawless and unaccountable zone, which will ultimately undermine public trust in the continued development and use of these technologies.

How do you regulate something as potentially ubiquitous as AI?

There are multiple levels at which AI can be regulated. The first is through the application of sectoral laws and regulations, providing specific rules or guidance for particular use cases such as health, education, or public sector use. Regulations in these spaces are often already well established but need to be refined to adapt to the introduction of AI.

The second is general rules regarding things like transparency and accountability, which incentivize responsible behavior across the AI chain (developers, deployers, users) and can ensure that core values like privacy and security are baked in.

Why do you think the House Republicans have proposed banning states from regulating AI for such a long period of time?

Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place.

But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause.

It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems. 

Can you describe some of the state statutes you believe are most important to safeguarding Americans from potential AI harms?

There is a range of statutes that would be overturned, including laws that govern how state and local officials themselves procure and use these technologies.

Red and blue states alike — including Arkansas, Kentucky, and Montana — have passed bills governing the public sector’s AI procurement and use. Several states, including Colorado, Illinois, and Utah, have consumer protection and civil rights laws governing AI or automated decision systems.

This bill undermines states’ ability to enforce longstanding laws that protect their residents or to clarify how they should apply to these new technologies.

Sen. Ted Cruz, R-Texas, warns that a patchwork of state AI laws causes confusion. But should a single federal rule apply equally to rural towns and tech hubs? How can we balance national standards with local needs?

The blanket preemption assumes that all of these communities are best served with no governance of AI or automated decision systems — or, more cynically, that the short-term financial interests of companies that develop and deploy AI tools should take precedence over the civil rights and economic interests of ordinary people.

While there can be a reasoned discussion about which issues need uniform rules across the country and which allow flexibility for state and local officials to set rules (an easy one would be regarding their own procurement of systems), what is being proposed is a blanket ban on state and local rules with no federal regulations in place.

Further, we have not seen, nor are we likely to see, a significant “patchwork” of protections throughout the country. The same argument was made in the state privacy context, yet, with one exception, states have passed identical or nearly identical laws, mostly written by industry. Preempting state laws to avoid a patchwork system that’s unlikely to ever exist is simply bad policy and will cause more needless harm to consumers.

Proponents of the state AI regulation moratorium have compared it to the Internet Tax Freedom Act — the “internet tax moratorium,” which helped the internet flourish in its early days. Why don’t you believe the same could be true for AI?

There are a couple of key differences between the Internet Tax Freedom Act and the proposed moratorium.

First, what was being developed in the 1990s was a unified, connected, global internet. Splintering the internet into silos was (and, to be frank, still is) a real danger to the fundamental feature of the platform that allowed it to thrive. The same is not true for AI systems and models, which are a diverse set of technologies and services that are regularly customized to respond to particular use cases and needs. Diverse regulatory responsibilities do not pose the same threat to AI that fragmentation posed to the nascent internet.

Second, removal of potential taxation as a means of spurring commerce is wholly different from removing consumer protections. The former encourages participation by lowering prices, while the latter adds significant cost in the form of dealing with fraud, abuse, and real-world harm. 

In short, there is a massive difference between stating that an ill-defined suite of technologies is off limits from any type of intervention at the state and local level and trying to help bolster a nascent and global platform through tax incentives.
WWW.COMPUTERWORLD.COM