The Dark Side of Generative AI: How It Impacts Photos and Videos

Generative AI is a type of artificial intelligence that can generate content, like images, videos, or text. Chances are you might be familiar with text-based examples such as ChatGPT and image and video examples such as DALL-E, Sora, or Stable Diffusion. Generative AI models are trained on large datasets and can create new content by learning patterns and structures from the data they are trained on. As you may already realize, AI-generated content is often difficult to discern without sophisticated analysis.

[Image: a car wreck created by generative AI]
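To appreciate how accessible such imagery has become, consider that a photorealistic scene like the one above can be produced in a few lines of code. Below is a minimal sketch using the open-source diffusers library and a publicly available Stable Diffusion checkpoint; the model choice, prompt, and file name are illustrative assumptions, not a depiction of any particular incident.

```python
# A minimal sketch of image generation with an open-source diffusion model.
# Requires: pip install torch diffusers transformers
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a consumer GPU is sufficient

# A single text prompt yields a photorealistic image in seconds.
image = pipe("a photo of a car wreck on a rainy highway").images[0]
image.save("generated_wreck.png")
```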

While generative AI has many potential positive applications, such as generating fun art, creating virtual worlds, helping write articles, and assisting in creative design, there are also negative impacts.

Some of these negative impacts include:

  1. Misinformation and Fake Content
    Generative AI can be used to create realistic but fake photos and videos that can be used to spread misinformation or deceive people. Deepfakes, which are generated using AI, can manipulate images or videos to make it appear as if someone said or did something they did not, leading to potential harm or damage to a person’s reputation. Unfortunately, the threat does not apply only to individuals. To get a sense of the potential harm to businesses, imagine fraud caused by an insurance claim built on fake photos (a rough illustration of why such fakes are hard to catch follows this list).
  2. Privacy Concerns
    Generative AI can create content that invades people’s privacy by generating images or videos that reveal personal information or depict individuals in compromising situations. This can result in loss of privacy, harassment, or even blackmail, sometimes putting the burden on the individual to prove that the content is fake.
  3. Bias and Discrimination
    Generative AI models learn from large datasets, which may contain biases present in the data. As a result, generative AI content can also carry those biases, leading to the creation of images or videos that perpetuate stereotypes, discrimination, or prejudice against certain groups of people. Without systems to audit the source and nature of the training data, fairness becomes nearly impossible to guarantee.
  4. Infringement Concerns
    Generative AI raises ethical concerns related to ownership and consent. For example, using generative AI to create content without proper attribution or permission from the original creators may result in copyright infringement or intellectual property violations. Rest assured, blaming AI is unlikely to work as an excuse for plagiarism.
  5. Emotional Impact and Confusion
    AI-generated content, such as deepfakes, can have psychological and emotional impacts on individuals who are deceived by it or whose emotions it manipulates. This can result in mistrust, anxiety, and emotional distress. We live in a world where people accept photos and videos at face value; once people are trained instead to suspect the veracity of everything they see, the deeper implication is that they may begin to doubt legitimate content as well.
  6. Legal and Security Issues
    The use of generative AI in creating fake content can have legal and security implications. For example, it can lead to legal disputes, defamation cases, or security threats when used for malicious purposes, such as spreading misinformation, fraud, or cyber-attacks. Imagine the chaos or fraud that might ensue from malicious phone calls generated in the voice of a high-authority figure in both corporate and political contexts.
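As a rough illustration of why fake photos are hard to catch, consider the kind of naive check an insurer might be tempted to run: looking for camera metadata. The sketch below uses Pillow, a common Python imaging library; the file name is hypothetical, and the heuristic rests on the assumption that AI-generated images often lack camera EXIF data, which a determined forger can easily fake.

```python
# A naive (and easily defeated) heuristic: photos straight from a camera
# usually carry EXIF metadata, while AI-generated images often do not.
# Requires: pip install Pillow
from PIL import Image
from PIL.ExifTags import TAGS

def camera_metadata(path: str) -> dict:
    """Return whatever EXIF tags the file carries (possibly none)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = camera_metadata("suspect_claim_photo.jpg")  # hypothetical file
if not tags:
    print("No EXIF metadata: consistent with AI generation or stripping.")
else:
    # Metadata can be copied from a real photo, so this proves nothing.
    print("Camera metadata found (but easily forged):", tags)
```

The takeaway is that surface-level checks cannot keep pace with generation quality, which is why the safeguards discussed next matter.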

It is important to be aware of the potential negative impacts of generative AI on photos and videos and to develop appropriate safeguards, guidelines, and regulations to mitigate these risks and ensure responsible and ethical use of this technology.

While many AI creators and corporations have proposed pausing AI development to prevent the potential harms from spiraling out of control, it may be too late; the truth is that malicious actors would never embrace such a truce.

Instead, the best defense is a system that can discern what is real from what is fake and, at a more subtle level, what is authentic from what has been generated. With that sort of system in place, it becomes much easier to focus on the positive aspects of generative AI without the threats. Incidentally, that is also our vision at Attestiv.
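As a simplified illustration of the fingerprinting idea behind such systems, a photo can be hashed the moment it is captured and re-verified whenever it is presented later; any post-capture edit, however subtle, changes the digest. This is a sketch of the general technique, not Attestiv’s actual implementation, and the file name is hypothetical.

```python
# A simplified sketch of capture-time fingerprinting: hash a photo when it
# is taken, record the digest somewhere tamper-resistant (for example, a
# blockchain), and compare against it whenever the file is presented later.
import hashlib

def fingerprint(path: str) -> str:
    """Compute the SHA-256 digest of a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At capture time: record the fingerprint in a tamper-resistant ledger.
recorded = fingerprint("claim_photo.jpg")  # hypothetical file

# At verification time: any edit to the photo changes the digest.
if fingerprint("claim_photo.jpg") == recorded:
    print("File matches its capture-time fingerprint.")
else:
    print("File was altered after capture.")
```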

(This article was updated in December 2024.)

Nicos Vekiarides

About Us

Attestiv provides authenticity and validation for digital photos, videos, and documents using patented tamper-proofing blockchain technology and AI analysis.

Nicos Vekiarides

Nicos Vekiarides is the Chief Executive Officer & co-founder of Attestiv. He has spent the past 20+ years in enterprise IT and cloud as a CEO and entrepreneur, bringing innovative new technologies to market. His previous startup, TwinStrata, a cloud storage company where he pioneered cloud-integrated storage for the enterprise, was acquired by EMC in 2014. Before that, he brought to market the industry’s first storage virtualization appliance for StorageApps, a company later acquired by HP.

Nicos holds six technology patents in storage, networking, and cloud technology and has published numerous articles on new technologies. He is a partner at Mentors Fund, an early-stage venture fund, a mentor at Founder Institute Boston, where he coaches first-time entrepreneurs, and an advisor to several companies. He holds degrees from MIT and Carnegie Mellon University.

Mark Morley

Mark Morley is the Chief Operating Officer of Attestiv.

He received his formative data integrity training at Deloitte. He served as the CFO of Iomega (NYSE), the international manufacturer of Zip storage devices and, at the time, the second-fastest-growing public company in the U.S. He also served as the CFO of Encore Computer (NASDAQ) as it grew from $2 million to over $200 million in revenue. During Desert Storm, Mark was required to hold the highest U.S. and NATO clearances.

Mark authored a seminal article on data integrity for The Wall Street Journal Online. Additionally, he served as EVP, General Counsel, and CFO at Digital Guardian, a high-growth cybersecurity company.

Earlier in his career, he worked at an independent insurance agency, served as a claims representative at Amica, and was the CEO of the captive insurance subsidiary of a NYSE company.

He earned a bachelor’s degree in economics and a Doctor of Law degree from Boston College and is a graduate of Harvard Business School.