The stakes are high on both sides of the Atlantic: in practice, AI already shapes decisions as consequential as prison sentencing and hiring.
The EU's Artificial Intelligence Act (AIA), which was approved by the Council on December 6 last year and will be considered by the European Parliament as early as March this year, would regulate AI applications, products and services under a risk-based rating system: the higher the risk, the stricter the rules. If passed, the AIA would be the world's first AI regulation to apply horizontally, across all sectors and applications.
In contrast, the United States has no federal law specifically regulating the use of artificial intelligence. Instead, it relies on existing laws, blueprints, frameworks, standards and regulations that can be stretched to guide the ethical use of AI. But while these give businesses and governments structures to follow, they are voluntary, and they offer no recourse to consumers who are harmed when AI is used against them.
In the absence of federal action, local and state governments have enacted their own remedies: laws addressing AI bias in hiring in New York City and across California, and in the insurance industry in Colorado. No proposed or enacted local laws addressing the use of AI in prisons or prison sentencing have surfaced in news reports. However, according to the New York Times, in 2016 Eric Loomis, a Wisconsin man, unsuccessfully sued the state over a six-year prison sentence that was based in part on artificial-intelligence software. Loomis argued that his due-process rights were violated because he could not inspect or challenge the software's algorithm.
"What I'm trying to make the case is that we still need to rely on the federal government," Haniyeh Mahmoudian, global AI ethicist at DataRobot, told EE Times. "Almost every American has a right to privacy matters, and that's something the federal government should deal with."

Elham Tabassi, Chief of Staff at NIST Information Technology Laboratory (Source: NIST Information Technology Laboratory)
The latest national-level guidance is the AI Risk Management Framework, published by the National Institute of Standards and Technology (NIST) on January 27 this year.
NIST's voluntary framework is designed to help U.S.-based organizations manage AI risks that could affect individuals, organizations and society in the United States. The framework builds trustworthiness considerations, such as explainability and the mitigation of harmful bias, into AI products, services and systems.
"In the short term, what we want to do is foster trust," said Elham Tabassi, chief of staff at NIST's Information Technology Laboratory. "We do this by understanding and managing the risks of AI systems so that we can help uphold civil liberties and rights, enhance security and 'at the same time' provide opportunities for innovation and creativity."
Taking a longer view, "we're talking about a framework that helps AI teams, whether they primarily design, develop or deploy AI, to think about AI with its risks and impacts in mind," said Reva Schwartz, a research scientist at NIST's Information Technology Laboratory.

Reva Schwartz, Research Scientist, NIST Information Technology Laboratory (Source: NIST Information Technology Laboratory)
The NIST framework follows the "Blueprint for an AI Bill of Rights," released last October by the White House under U.S. President Joe Biden. The Blueprint proposes five principles to govern the ethical use of AI:
- Systems should be safe and effective;
- Algorithms and systems should not discriminate;
- People should be protected from misuse of their data and have control over how their data is used;
- Automated systems should be transparent;
- Opting out of an AI system in favor of human intervention should be an option.
Biden's light-touch approach appears to be a direct continuation of the light-regulation stance favored by his predecessor.
Don't wait for legislation
Danny Tobey, a partner at law firm DLA Piper, told EE Times that there is no AI law in the U.S. because the technology is changing too quickly for lawmakers to pin it down long enough to draft legislation.

Danny Tobey, partner at law firm DLA Piper (Source: DLA Piper)
"Everybody is coming up with architecture, but very few are coming up with actual rules around which you can plan," Tobey mentions. "We promised an AI Bill of Rights, but what we got was a 'blueprint for an AI Bill of Rights' that had no legal effect."
Tobey believes that regulatory proposals worldwide converge on third-party audits and impact assessments that test AI applications for safety, non-discrimination and other key aspects of ethical AI. These, he noted, are tools companies can already use.
"The solution is for companies to start testing AI technologies against these expected standards even before legislation is finalized to build future-proof, AI-compliant systems and anticipate future regulations," he said. â
In the EU, at least one company is already aligned with Tobey's thinking: Netherlands-based NXP Semiconductors has developed its own AI ethics initiative.
Does the U.S. need specific AI laws?
Does the United States already have laws to protect the public from the unethical use of AI?
In September 2022, SEC Chairman Gary Gensler addressed the question at the MIT AI Policy Forum Summit, an event devoted to the challenges of deploying AI policy.
"Through our legislature, we protect the public through the law," he said. "These laws are safe, healthy, investor protection and promote financial stability. And these are still tried and tested public policies. â
Gensler also noted that rather than assuming a new tool such as artificial intelligence requires a new law, legislators and others should focus on how existing laws apply.
The SEC can point to existing investor-protection laws. The corollary in banking is the Equal Credit Opportunity Act, while the Fair Housing Act protects people from discrimination when they apply for a mortgage.
Is self-regulation the answer?
Leaders at Diveplane, which makes AI-powered software for the commercial and defense sectors, welcome Biden's Blueprint, the EU's AIA and similar efforts.

Mike Capps, CEO of Diveplane (Source: Diveplane)
"This will help protect consumers," Michael Meehan, Diveplane's legal counsel and legal director, told EE Times. "People think it's at odds with what the company might want. But the truth is that most companies, including Diveplane, want to be guided. â
Meehan noted that government AI laws and proposed regulations so far lack "safe harbor" provisions that would reduce risk for users.
A safe-harbor clause is a legal provision that shields a party from penalty or liability when certain conditions are met. For example, if a properly implemented, instance-based AI loan-approval system detects a possible error, the case can be flagged for human review.
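To make that concrete, here is a minimal sketch of such a human-review fallback; the confidence threshold, field names and review queue are hypothetical illustrations, not anything Diveplane or a regulator has specified:

```python
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_FLOOR = 0.85  # hypothetical cutoff below which a person decides

@dataclass
class LoanDecision:
    approved: Optional[bool]  # None means the case was deferred to a human
    reason: str

review_queue: list = []  # applications waiting for a human underwriter

def decide(application: dict, score: float, confidence: float) -> LoanDecision:
    """Automate only confident decisions; route everything else to a person."""
    if confidence < CONFIDENCE_FLOOR:
        review_queue.append(application)
        return LoanDecision(None, "low model confidence; flagged for human review")
    return LoanDecision(score >= 0.5, "automated decision")
```

The point of the sketch is the escape hatch: a case the system is unsure about never becomes an automated denial, which is the kind of condition a safe-harbor provision might require.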
Mike Capps, CEO of Diveplane, welcomes regulation, but he is also a proponent of industry self-regulation.
To illustrate, he cites U.S. patient-privacy law. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) provides a safe harbor for those who remove identifying information from medical records; unfortunately, cross-referencing such a de-identified database with another database can re-identify people who were supposed to remain anonymous.
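A toy sketch of that linkage risk follows; every record and field name here is invented, and the point is only that matching on quasi-identifiers such as ZIP code, birth year and sex can be enough to put names back on "anonymous" records:

```python
# Hypothetical data: the medical records carry no names (HIPAA safe harbor),
# but a public roster shares quasi-identifiers with them.
deidentified_records = [
    {"zip": "27601", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
]
public_roster = [
    {"name": "J. Doe", "zip": "27601", "birth_year": 1984, "sex": "F"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(records, roster):
    """Return (name, record) pairs wherever the quasi-identifiers match."""
    return [
        (person["name"], rec)
        for rec in records
        for person in roster
        if all(rec[k] == person[k] for k in QUASI_IDS)
    ]

print(reidentify(deidentified_records, public_roster))
# [('J. Doe', {...'diagnosis': 'asthma'})] -> the "anonymous" diagnosis gets a name
```

This is the adaptation problem Capps describes: the safe harbor was written before large, easily linkable datasets made this kind of join trivial.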
That is something 20th-century lawmakers could not have anticipated. "If you set hard-and-fast rules about how computers work today, you don't have the ability to adapt to... technology that didn't exist when you wrote it," Capps said.
That idea led Diveplane to co-found the Data & Trust Alliance, a nonprofit consortium whose members "learn, develop and adopt trusted data and AI practices," according to its website. Capps sits on the alliance's leadership committee, whose members represent organizations such as the NFL, CVS Health and IBM.
The organization is developing ethical standards for artificial intelligence. "These rules will continue to change and evolve, because they need to," Capps said. "Would I write them into law? Absolutely not, but I would certainly use them as a case study of how to build a flexible system that minimizes bias."
Mahmoudian noted that the EU's AIA allows the risk level assigned to an application to be re-examined as new data emerges. That matters, she said, for cases like Instagram, which was once considered harmless but was shown, years after it launched, to harm teenagers' mental health.
(Original article: "EU, U.S. Making Moves to Address Ethics in AI," by Ilene Wolff; compiled by Luffy Liu)