
Google and IBM Are Calling for AI Regulation

Last week, Google's CEO, Sundar Pichai, called for the regulation of artificial intelligence. The next day, IBM proposed rules targeting AI systems that can discriminate against consumers, citizens, and employees based on their gender, age, and ethnicity, among other characteristics.

Mr. Pichai wrote in an editorial for The Financial Times: “There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.” (FT's site was not accessible at the time of writing this note.)

He called for a cautious and nuanced approach, based on the technologies and sectors in which AI is used. In some areas, such as autonomous vehicles, new rules are needed. Others, such as financial services, insurance, and healthcare, are already regulated, and the existing frameworks should be extended to cover AI-powered products and services.

“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” wrote Mr. Pichai. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

The sentiment is echoed by IBM, which issued policy proposals in preparation for the AI panel hosted by its CEO, Ginni Rometty, at the World Economic Forum in Davos last week.

IBM recommends that companies work with governments to develop standards to avoid discrimination by AI systems, that they conduct assessments to determine risks and harms, and that they maintain documentation to be able to explain decisions that adversely impact individuals.
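As a rough illustration of what such an assessment might look like in practice, the sketch below checks whether a system's favorable decisions are spread evenly across demographic groups (a simple demographic parity check). It is a minimal, hypothetical Python example: the function names and sample data are invented here and are not part of IBM's proposals or any specific framework.

```python
# Minimal, hypothetical sketch of a bias assessment: compare the rate of
# favorable decisions across demographic groups and flag large gaps.
# All names and data below are illustrative, not from IBM's proposals.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favorable: bool) pairs -> favorable rate per group."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        favorable[group] += int(ok)
    return {g: favorable[g] / total[g] for g in total}

def parity_gap(rates):
    """Largest difference in favorable-decision rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Example: hypothetical loan-approval decisions produced by a model.
sample = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(sample)
print(rates)              # roughly {'group_a': 0.67, 'group_b': 0.33}
print(parity_gap(rates))  # roughly 0.33 -> a gap this large would be flagged and documented
```

A fuller assessment would also cover the documentation IBM calls for: recording which metrics were checked, on what data, and how decisions that adversely impact individuals can be explained.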

Our Take

As consumers become increasingly aware of the degree to which AI controls and shapes our lives and society, and of the harms resulting from biased applications, pressure is intensifying on technology firms and governments alike to put guardrails in place and, in some cases, put on the brakes so that society can catch up and work out what needs to be regulated and how.

Of particular concern are facial recognition technologies (FRTs), which are used by law enforcement agencies around the world to identify and track (potential) criminals and by governments for social surveillance and social engineering, including monitoring and persecuting ethnic minorities.

These technologies and the pervasive surveillance they create violate basic human rights, such as the right to privacy. (See Amnesty International’s report and our note Amnesty International Calls Google and Facebook a Threat to Human Rights.)

The Economist compared AI a while back to the ancient Roman god Janus, who is depicted with two faces, one looking into the past and the other into the future. (He is the god of “beginnings, gates, transitions, time, duality, doorways, passages, and endings.”) Janus, writes The Economist, “contained both beginnings and endings within him. That duality characterizes AI, too.”

There is “good” AI and “bad” AI, and then there is “good” AI with some bad mixed in, such as biases. And then there is “good” technology that could lead to unanticipated, undesired, and potentially horrific outcomes, just like numerous other things in life and many technologies before AI. As a society, we need time to anticipate and sort this out, hopefully before we build the AI equivalent of the atomic bomb. And I don’t mean the singularity (i.e., hypothetical uncontrolled and irreversible technological growth).

And it is our responsibility as technology and business leaders to think through the consequences of using new technologies and the impact they may have on individuals, communities, society, and the world at large.


Want to Know More?

To get educated on AI biases and start eliminating them, see Info-Tech’s blueprint Mitigate Machine Bias.

To learn about AI guardrails and the controls we recommend you start putting in place even if you are just getting your feet wet with AI, look out for our upcoming blueprint on AI governance, or reach out to the analysts to get a kick-start.
