Addressing AI Privacy Risks in Light of Minnesota’s Proposed Legislation

Artificial intelligence is a powerful technology, and in recent years, numerous use cases have emerged — both positive and negative. The negative ones, such as AI-generated deepfake imagery, have become a focus of public scrutiny and government regulation, and rightfully so. These harmful applications of artificial intelligence threaten not only the public but also the positive growth of AI technology itself.

Recently, Minnesota lawmakers introduced a bill that would restrict the use of artificial intelligence to create misleading pornographic images of people. Under this law, businesses offering deepfake services in Minnesota would have to disable their tools' ability to generate pornographic material depicting real people. This goes a step further than previous laws, which banned the dissemination of non-consensual sexually explicit deepfakes but contained exemptions for satire, parody, or commentary — provisions the new bill excludes.

It is important to note that, in Minnesota, this bill has drawn bipartisan support, bringing together state senators from both sides of the aisle. This cooperation is evidence that the deepfake problem is urgent and demands a timely resolution to protect the public good. The dissemination of sexually explicit deepfake images can have severe consequences for a victim's reputation, livelihood, and mental health, which is why wrongdoers who abuse artificial intelligence in this way must be stopped.

Although the spread of misinformation — even in photographic form — is nothing new, artificial intelligence technology poses such a threat because it has made deepfake images “better” and more convincing. These days, it is becoming more and more difficult to tell the difference between reality and fabrication, meaning that deepfakes are arguably more damaging than ever.

Curbing the Misuse of Artificial Intelligence Technology

AI technology is like any powerful innovation: if there is a way the technology can be abused, wrongdoers will find it. We cannot let the misdeeds of a few bad actors interfere with the ability of innovators to develop AI technology that serves the greater good.

Therefore, the goal is to mitigate the risks of AI technology abuse while encouraging its responsible development. However, achieving this delicate balance can, admittedly, be quite tricky, especially when it comes to a technology that is as novel and innovative as AI.

AI-generated deepfakes are a legitimate problem that must be solved before artificial intelligence can realize its full potential as a transformative force for good. And reputational damage is, unfortunately, only one of the ways deepfake images can hurt their victims. Wrongdoers have also used deepfakes for blackmail, impersonation, the spread of misinformation, and identity theft — all of which carry consequences not only for the victim but also for the general public.

That being said, this merely scratches the surface of the privacy violations that AI technology poses. Deepfake images are an indicator of the broader context of data privacy concerns surrounding the proliferation of artificial intelligence, and the misuse of AI technology erodes the public’s trust in the information they see online. Meanwhile, the extensive data collection of artificial intelligence systems introduces new vulnerabilities that can be exploited and lead to further risks.

Collaboration is the Key to Effective AI Regulation

These issues are precisely why there is a need for a legislative framework surrounding artificial intelligence. Although the wrongdoers in these situations are those who abuse and misuse AI, the creators of this technology still have a reasonable responsibility to limit the functionalities of tools that can be used for harm, such as deepfake tools’ ability to generate pornographic content of real people — which is what lawmakers in Minnesota have set out to do.

Ultimately, the change we need to see to establish a safer future for artificial intelligence is a collaboration between AI developers, users and lawmakers to create an ethical framework for the technology’s positive use. Because AI is still so new, many lawmakers (understandably) do not fully understand how it should be regulated. Nevertheless, the creators of these tools must be held accountable for ensuring this mighty technology does not fall into the wrong hands.

The fight over deepfakes is a key battleground in the future of AI. By creating a legislative framework that prohibits and penalizes harmful uses of the technology, lawmakers can create a landscape where positive uses of AI can thrive, setting an encouraging precedent for other applications of artificial intelligence in the future.


Ed Watal is the Founder and Principal of Intellibus, an INC 5000 Top 100 Software firm based in Reston, Va. He regularly serves as a board advisor to the world’s largest financial institutions. One of his key projects includes BigParser (an Ethical AI Platform and A Data Commons for the World). He has also built and sold several tech and AI startups. Prior to becoming an entrepreneur, he worked in some of the largest global financial institutions, including RBS, Deutsche Bank and Citigroup. He is the author of numerous articles and one of the defining books on cloud fundamentals, called ‘Cloud Basics.’ Watal has substantial teaching experience and has served as a lecturer for universities globally, including NYU and Stanford.
