Where Are the Meaningful AI Guardrails?

Joseph Dana
Apr 12
Employees at the OpenAI office in San Francisco. Wall Street Journal

Italy has become the first Western country to block the popular artificial intelligence bot ChatGPT. Italian authorities didn’t block the AI software because the technology is advancing too quickly or becoming too powerful. Instead, the Italian data protection authority blocked the application over privacy concerns and questions about ChatGPT’s compliance with the European Union’s General Data Protection Regulation. The move made international headlines and tapped into deeper global fears about AI’s accelerating reach.

Many of us can’t comprehend how the technology has developed so quickly. One reason is that there have been few regulatory guardrails to keep tabs on AI’s growth. Humanity needs such guardrails in place, but that is much easier said than done.

The regulation of AI is becoming increasingly vital as the technology is used more widely in areas such as health care, finance, and transportation. According to a study by researchers at the University of Pennsylvania and OpenAI, the privately held company behind ChatGPT, most jobs will be changed significantly by AI in the near future. For nearly 20 percent of the jobs in the study, ranging from accountant to writer, at least half of their tasks could be completed much faster with ChatGPT and similar tools. While we don’t know what this will do to the labor market, it will have an unavoidably large impact that could have knock-on effects across society.

There has been modest movement toward better regulation worldwide in the last decade. The European Union, in line with its data protection standards, has developed a framework for AI regulation that includes rules for high-risk AI applications, requirements for transparency and accountability, and a ban on specific uses of AI, such as social scoring — the practice of using the technology to rank people by their trustworthiness. The United States has also slowly started to regulate AI, with the National Institute of Standards and Technology developing a set of principles for AI governance and the Federal Trade Commission taking enforcement actions against companies that use AI in ways that violate consumer protection laws.

Yet, these regulations are inadequate, given the speed at which AI develops. The concern over AI’s growing power and the lack of…