May 14, 2025
Human Friend Digital Podcast
Detecting AI in a World Flooded With Fakes

In this episode of Human Friend Digital, Jacob and Jeff tackle the growing challenge of detecting AI-generated content—images, text, and everything in between.
Starting with the viral trend of AI-generated Studio Ghibli-style portraits, they explore the ethical gray areas around using AI to imitate human art. From there, the conversation widens: how AI-generated content is becoming harder to spot, and why tools like Hive Moderation are now necessary to catch subtle digital fingerprints left behind by AI models.
Jacob and Jeff also dive into the bigger picture—the responsibility of government regulation, the dangers of deepfakes in politics, and how AI’s role should be reshaped to solve real-world problems instead of replacing creativity. Their message is clear: AI isn’t going away, but we can—and should—fight to keep truth and trust alive.
Their advice for now? Equip yourself with AI detection tools. Learn to question what you see. And maybe, just maybe, don’t encourage the machine by giving it your clicks.

LINKS:
Hive: https://hivemoderation.com/
Grammarly: https://www.grammarly.com
NPR Article on the Human Cell Atlas: https://www.npr.org/2025/04/18/g-s1-60986/how-the-human-cell-atlas-is-fast-tracking-new-medicines
[This transcript has been edited for clarity]
Jacob:
Hello Jeff. Welcome to another edition of the Human Friend Digital Podcast.
Jeff:
Hey Jacob. Today we are talking about AI. And one thing that happened recently online with the AI trend was people posting a lot of photos of themselves in the style of Miyazaki. Hi—Hayao Miyazaki? Is that how you—
Jacob:
Don’t ask me to pronounce his first name.
Jeff:
Okay. Yeah. Miyazaki, who is the brains behind Studio Ghibli. And a lot of people are posting themselves using AI-generated versions of themselves in the style of his studio. So we wanted to talk today about the ethics behind that and how you could be a responsible AI user, and the implications that are involved here.
Jacob:
Yes. And I think another thing too is we want to talk about—and I want to bring up—is how do you know when something is made by AI? Because it’s getting really, really good and it’s getting harder and harder to detect. I would say a year ago, it was super easy to detect. If there was a human in it for some reason, they would have the wrong number of fingers or they would have some part of their body melding into furniture or things that just didn’t look right. And it could not do text at all.
Jeff:
It couldn’t do it. It was like—so you know that thing where it’s like, if you ever want to know if you’re dreaming, see if you can read a clock? Because in your dreams, clocks won’t render correctly—and “render” is the wrong word—but like—
Jacob:
No, the concept’s the same.
Jeff:
Yeah. You’ll look at a clock and be like, that’s not a real clock—I must be dreaming.
Jacob:
That’s a good one right there. Yeah. Or reading a book. I’ve tried that.
Jeff:
Yeah, yeah, yeah. Or reading a sign or whatever.
Jacob:
The pages and the letters just won’t compute properly. The AI follows much the same process when it builds these dreamscapes. But now it can create pretty good knockoffs of just about anything. Really good photos of people.
Jeff:
Very high fidelity.
Jacob:
Yeah. And now you can actually give it real photos and ask it to make an alternate version of it. If you give it the real photo to start from, it can make a really good fake photo based on the real photo. You can make these weird alternate reality moments, which is a little scary because you’re looking at it like—what do I believe?
Jeff:
Absolutely.
Jacob:
How do I detect this? So I do want to talk about how we can basically be prepared, and if you do want to check stuff—how do you check it?
Let’s dive in. Hit me with the first question for the episode and let’s go from there.
Jeff:
It’s more of how you think about things versus a question. This whole trend online of people stylizing their photos for Instagram or Facebook or whatever based on Studio Ghibli’s artwork—how do you feel about that as an ethical consideration? Should you do that? It’s not for commercial purposes, right? It’s just people having fun.
Jacob:
If it’s not for commercial purposes, that does have some novelty to it. It’s hard for me to be high and mighty over here saying don’t use it. But I do think it needs to come with some parameters: you have to own up that this was created using X, and it should not be used for commercial purposes. That said, it’s a really gray area because there’s no regulation around it. And Miyazaki—any artist of that caliber—having their work co-opted like this is really gross. They’ve worked a lifetime to craft this form. And you, before this episode, had this great quote from him.
Jeff:
Well, he said recently—or at least it was reported recently—that AI-generated artwork was “an insult to life itself” and that he was “utterly disgusted by this trend.”
Jacob:
I think that basically sums it up very brutally. It is kind of an insult to life itself. AI, I think, has some really great benefits for meaningless work. Because we have a lot of meaningless work in reality.
Jeff:
You and I have talked about this again and again, and I’ll go back to a quote that I might have said on the podcast before: “The whole point of AI should be so that it does my laundry and my dishes so that I can create art and write. Instead, it’s being used to create art and write, so all I have left to do is my laundry and my dishes.”
Jacob:
Yeah, that’s—see, that’s—I agree. I agree wholeheartedly with that effort or that thought. And I do think that I wish AI would take a much harder bent towards what I would categorize as meaningless work.
Jeff:
Yes. Yeah. Like, the drudgery of life. Like, the stupid silly work, the data entry stuff.
Jacob:
Mass data entry, sifting through that stuff, reviewing basic edits on content, helping with that. But I also think AI should mostly be focused on helping real-world situations and creating new taxable things for people. Like, I don’t think anyone should ever have to work at McDonald’s.
Jeff:
Okay.
Jacob:
I think that’s a demeaning job for people, and I’m sure—
Jeff:
Okay. That’s interesting.
Jacob:
But I do think that there are certain types of work that are ethically just bad. Coal mining, for one—I believe those people should never have to go into a deep, dark cave and almost die for minerals. And I don’t think the fry cook should have to exist at mass speed and scale. I do think that if you’re going to cook food, you should be cooking it well.
Jeff:
Sure, sure. There’s a difference between fast food and a restaurant, between being a chef and being a cook. They are different things.
Jacob:
Exactly. And I think that AI should stay out of the creativity portion of cooking. But if it’s just basically printing hamburgers—that’s where I feel like all AI should be going: to replace the jobs that are harmful to people or the jobs that are ethically unsound.
I don’t care; this is just what I ethically feel.
Jeff:
You can feel however you want. I’m just saying this is a hot take.
Jacob:
Then if they do put AI in there, it would be great if the government just came in and said: great, you can use automation, and we’re going to create an automation tax. And that automation tax goes directly to support a universal basic income, Andrew Yang style back in the day.
Jeff:
I don’t like Andrew Yang, but I do like the idea of universal basic income.
Jacob:
I think we’re gonna need it because I think there’s too many jobs that AI could do really well, that are really—
Jeff:
And it’s going to do really well. And so then what do we do? In a capitalist society, it’s inevitable that we’re gonna use AI to replace jobs that can be done by AI. And then what do you do with all of that? ’Cause it’s creating productivity. So what do you do with all that productivity? Is it just free money for companies? No. I agree with you, there should be—
Jacob:
They should tax it.
Jeff:
—AI tax. Yeah.
Jacob:
Every time that AI can come and take away a job, there should be a tax upon AI’s use for that, and that should go into a universal basic income fund.
Jeff:
I agree with that. I think that’s smart.
Jacob:
And then I think AI should be exclusively used on jobs that are extremely dangerous for humans. Well, not exclusively, but primarily. Getting rid of graphic designers on Fiverr—people already racing to the bottom for cheap labor around the world—and supplanting them? What the hell? We just don’t need that. There was already a cheap graphic design labor market, which has its own ethical problems.
We don’t need to supplant people who are just struggling to get by. The zone is already flooded with art from the many people making it, and plenty of them do a great job. AI, like you said, should be out of the creativity space and into the space of solving large-scale problems that require huge data sets—basically, medical bioscience. I mean, what AI has added to medical bioscience is amazing.
Jeff:
I was just gonna say—right. AI is able to solve these protein-structure problems. You know, I studied biochemistry, and back in the day—and by back in the day, I mean like 10 years ago—it was always such a huge computational problem: looking at a gene sequence and trying to figure out what protein it would create, or what shape the protein was. AI is really good at looking at that gene sequence and saying, this is what you’ll get from it. And it’s really accurate. It’s also really good at comparing data sets, whether that’s from sample populations or whatever, to say: these are your risk factors for X, Y, Z. Or: if you exhibit these symptoms, then you have cancer, or whatever.
Jacob:
A mass-level understanding of really complex shapes, geometries, organic compounds, and stuff like that. You could see how some of that is probably being used to create this artwork—understanding patterns and then being able to replicate patterns—
Jeff:
Oh, sure. Yeah, yeah, yeah.
Jacob:
I don’t want it to do… I don’t need it.
Jeff:
I don’t need it to make Miyazaki.
Jacob:
I need it to cure cancer.
Jeff:
Yeah.
Jacob:
And there are some really promising articles recently—I’ll see if I can find a link for the show notes. It was an NPR article talking to a project leader at Genentech—I forget how to pronounce the name. But, not to put words in their mouth, it does sound like 15 years from now they’re going to have highly individualized, AI-assisted healthcare where they will be able to make cancer vaccines for people.
Jeff:
Recently they’ve kind of come around to the—like, your breast cancer and my breast cancer aren’t the same breast cancer. And so figuring out what those differences are will help them be treated more effectively. Because like now we’re doing a shotgun approach, right? Chemotherapy, radiation—that’s a shotgun approach. And now they’ll be able to do gene therapy stuff or immunotherapy stuff that’s very targeted to your specific body and, I guess, your specific cancer type. And AI will be really helpful in parsing that out, I guess.
Jacob:
We’re going to have the Star Wars robot doctors with the tubes, and then you’re going to go into the bacta tank—’cause that’s what it’s called, a bacta tank—and then you’re going to get the little thing and you’ll be floating around, and then you’re going to come out and you’re going to be perfect.
Jeff:
Uba. Uba. What was the—was that what they said when Padme was giving birth?
Jacob:
Oh yeah. Yeah. My God, that robot was—
Jeff:
Sticks in my brain. It’s like “uba.” That’s a very soothing sound. Thank you, Uba doctor.
Jacob:
That’s where we want AI to be—not this. So let’s hit your next couple of questions.
Jeff:
So yeah. What I want to talk about mostly is, who do you think is responsible for regulating this, right? Because it’s all in the hands of these companies. And even though OpenAI started out as a nonprofit, now they’re trying to get for-profit status. And they said they would be really—you know, open AI, like they’d be open—and now they’re like, actually, it’s proprietary. So like, what do you think the role of government is? Or do you think government should take a role?
Jacob:
Yeah, I think they should take a really simple role. They should pass laws that are almost a little vague but highly enforceable. I was doing some research for this episode, and Hive Moderation would be a good example: the government should partner with a company like that—one that uses AI to detect AI—and require every single image that goes through a website to be checked. Maybe you set a threshold, like websites with over 5,000 visitors a month—something most small businesses could barely scratch—
Jeff:
Sure. But if you’re like the Facebooks or the New York Times or the whatever of the world.
Jacob:
Yeah, yeah. And maybe 5,000 is a little small; maybe 10,000 average monthly visitors would be a good threshold. You should be required to have this government-funded plugin—with a variety of choices—that you put on your website, and it literally labels every single AI thing for you so you—
Jeff:
Do you know how it finds the AI through AI? Or is it one of those other mysteries, where we don’t know how the AI works altogether and we don’t know how the AI finds AI—we just know that it does?
Jacob:
Well, essentially, I believe it kind of reverse-engineers the picture creation, but that might be a little bit of a dumb explanation for it, I think.
Jeff:
Yeah. It might be above our expertise and our ability to explain, and it’s not really on the scope of this podcast.
Jacob:
I was involved in an AI company—one from before ChatGPT and all this stuff took off. About five, six years ago I had the opportunity to work with an AI pioneer organization that did not make it, but they did some really cool things. Their whole idea was: let’s say you have a really expensive photo shoot, but you need more models to create more variance in diversity, right?
Jeff:
Okay.
Jacob:
Which is really controversial—which was probably why they didn’t make it—because people didn’t know what to do. If you’re like, well, I’m glad I hired the white model, but now I want him to be a Black model. Oh, this model is too Black, I would like him to be Mexican.
Jeff:
Oh no. Yeah, they shouldn’t have made it. This is like Blackface AI.
Jacob:
Yeah, it was really, really awkward with some of the stuff, but it was so interesting I couldn’t look away. So I did talk with these guys because I wanted to understand how it would work, and clients were asking—I won’t say which major organizations in the U.S. were asking for features like this.
Jeff:
Lips are sealed.
Jacob:
I have an NDA, so I can’t say. But I will say it was larger than a local company. It was looking for potential in this avenue and ways to save money. Because one of the most expensive things on photo shoots is full buyouts for people’s faces. Sometimes the model can get paid more than the photographer.
Anyways, during that time period, our retouchers were zooming in at like 3000% and assessing the AI-generated images of these faces. And through that process, we did talk to them about how it works—it sweeps through line by line, kind of like a printer, but it also creates things based on the relative pixels around a point. So it almost has this fractal-like generation that spreads out from it.
Now, this was five or six years ago, so I don’t know if current models use similar technology. But when you zoom in really, really far, the way the pixels are actually rendered is really odd.
If you look at a real photo—take a Canon photo, right—and you zoom in really, really far, you will see something very clear… I feel silly, I don’t remember what it’s called, but it’s the thing that captures the image—the light sensor. There are really sophisticated light sensors in Canons. If you go in far enough, you can see how it’s capturing light, and you go, oh, okay: it’s a very uniform grid pattern.
AI is not that way.
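[Editor’s note: a toy illustration of the grid-versus-no-grid idea Jacob describes here. It pulls the high-frequency noise out of an image and checks its frequency spectrum, where a camera sensor’s uniform grid tends to show up as strong periodic peaks. This is a rough heuristic under our own assumptions, not how Hive or any production forensic tool works, and the filename is a placeholder.]

```python
# Toy sketch: does an image's noise residual show a regular, sensor-like
# pattern? Real forensics (e.g. PRNU analysis) is far more sophisticated.
import numpy as np
from PIL import Image
from scipy.signal import convolve2d

def noise_spectrum_peakiness(path: str) -> float:
    # Load the image as a grayscale float array.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # Noise residual: subtract a blurred copy (a cheap high-pass filter).
    blur = convolve2d(img, np.ones((3, 3)) / 9.0, mode="same", boundary="symm")
    residual = img - blur
    # Magnitude spectrum of the residual; a uniform sensor grid tends to
    # concentrate energy in a few periodic peaks.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(residual)))
    # Score: how strongly the largest frequency component dominates the mean.
    return float(spectrum.max() / spectrum.mean())

# "photo.jpg" is a placeholder; compare scores for camera photos vs. AI images.
print(noise_spectrum_peakiness("photo.jpg"))
```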
Jeff:
Yeah, it’s got aberrations that can be predicted for, accounted for.
Jacob:
Yes, yes. So you can take a tool—and Hive has one that’s pretty good. When I was doing some testing on it—let me pull it up and we’ll go through it a little bit. I know the listener cannot follow along, but I’m taking an AI-generated image and putting it into the system—
Jeff:
It was our joke podcast cover for this episode where we had AI do us as Studio Ghibli in space.
Jacob:
Right, which we’re going to put inside of a screenshot maybe on the episode to show the results of this other tool detecting it.
Jeff:
Sure, sure, sure.
Jacob:
But essentially, you can use AI to kind of zoom in real deep on a photo and then take all the information of it. And each model has its own—
Jeff:
It says, “This input is likely to contain AI-generated or deepfake content. 99.9%.”
Jacob:
Yeah. And not only can it detect the deep underlying generation patterns—the ones you see at the thousands-percent zoom—it can tell that I generated this with GPT-4o. It will know the exact model, and I think—
Jeff:
Cool. Oh, I like that.
Jacob:
And I think this really needs to be the answer. So we’ll put a link in the show notes for anybody. If you’re feeling like you don’t know whether an image is AI, you can download it and then upload it to this tool and see if it is—if someone has created it and you wanted to—
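[Editor’s note: for anyone who would rather script this check than use the web interface, Hive offers a developer API. The sketch below shows roughly what submitting an image could look like; the endpoint URL, auth header, and response fields are our assumptions from Hive’s public docs, so treat it as illustrative and confirm against their documentation.]

```python
# Hypothetical sketch of submitting an image to an AI-detection API such as
# Hive's. Endpoint, header, and response shape are assumptions, not gospel.
import requests

API_KEY = "YOUR_HIVE_API_KEY"  # placeholder credential

def check_image(path: str) -> dict:
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.thehive.ai/api/v2/task/sync",  # assumed endpoint
            headers={"Authorization": f"Token {API_KEY}"},
            files={"media": f},
        )
    resp.raise_for_status()
    # The JSON response carries class scores, e.g. an "ai_generated"
    # probability you can compare against a threshold.
    return resp.json()

print(check_image("podcast_cover.png"))  # e.g. our Ghibli-in-space cover
```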
Jeff:
And I also think, just piggybacking off of this—like, why can’t AI companies be required to watermark any of their creations?
Jacob:
That would be another good regulation.
Jeff:
Yeah. If someone uses an image, it’ll say “created by ChatGPT” or whatever across it.
Jacob:
Well, I do think there’s a moral gray area in art around modification. Andy Warhol’s a great example. He made a giant painting of a Campbell’s soup can.
Jeff:
Right.
Jacob:
So he didn’t design the Campbell’s soup can or the label.
Jeff:
I guess that’s true.
Jacob:
But then he made a really awesome, cool painting of the Campbell’s soup can. He also did other mixed-media art—he’s just the most famous example. A lot of artists have done this: they take different pieces of existing material and put them together to create new creative art. That new creative art comes with its own copyright. So if you use AI in that capacity, you don’t want a watermark on it. And if you do make something new with it, you—
Jeff:
Yeah, that’s a very—it’s an interesting side of the question that I had not considered.
Jacob:
Yes. This is where I think it comes down not to the AI tools watermarking their work. Because if you want to use it between you and your buddy in a group chat to make a really funny photo—kind of like a homemade emoji system—whatever.
But if you’re going to post it online as, like, a cover story to go along with something, this is where I think government agencies and social media outlets need to be required to have a tool on their website that runs through all the images, watermarks them live with “AI Gen” in the corner, and clearly labels them so a user knows this was AI-generated. Then people won’t have the question.
And I think simply putting that on there and requiring it will drastically cut down on people using AI in any nefarious way, because they’ll know they’re going to get tagged. And then it’s like, ah, what’s the point?
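[Editor’s note: a minimal sketch of the “AI Gen” corner label Jacob describes, using the Pillow imaging library. Only the stamping step is shown; the detection call is a hypothetical placeholder.]

```python
# Minimal sketch: stamp a translucent "AI Gen" badge in the corner of an
# image a detector has flagged. Pillow only; the detector is not included.
from PIL import Image, ImageDraw

def stamp_ai_label(src_path: str, out_path: str) -> None:
    img = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    w, h = img.size
    # Dark translucent box in the lower-right corner, then the label text.
    draw.rectangle([(w - 110, h - 38), (w - 8, h - 8)], fill=(0, 0, 0, 160))
    draw.text((w - 100, h - 30), "AI Gen", fill=(255, 255, 255, 255))
    Image.alpha_composite(img, overlay).save(out_path)  # save as PNG

# Hypothetical platform-side flow: label anything the detector flags.
# if ai_score("upload.png") > 0.9:      # ai_score is a placeholder
#     stamp_ai_label("upload.png", "upload_labeled.png")
```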
Jeff:
And then it’s like, you’re never going to have your weird uncle being like, “That was a deepfake,” when it was real. Or being like, “Look at this thing that’s real,” and you’re like, “That’s a deepfake, weird uncle.” I actually don’t have any weird uncles, so I’m lucky, but I know that that’s what people say.
Jacob:
No, no, it’s a real thing, and I think we should require AI to solve AI problems. I think that’s the most creative way to actually solve it—is to use the power it brings to solve its own issues.
Jeff:
A little recursive, but I suppose when you don’t really know how an AI model works, maybe the only thing that can govern it is another AI model.
Jacob:
Yeah, I know. I think that’s fair. I think it’s totally fair. It’s a little scary.
Jeff:
I don’t know how I feel about it.
Jacob:
But what I would like to point out too is that this is not only an image conversation—it’s also a text conversation.
Jeff:
Right. Okay, so let’s talk about that.
Jacob:
So, there’s some really easy tools. I like to use Grammarly for this. I’m a Grammarly user, but I don’t get any affiliate link. If you go over there and sign up for Grammarly, I don’t see any money.
Jeff:
We don’t get any money from them.
Jacob:
No, but I would like to say that they have a great AI detection tool baked in. So you can copy and paste content in there. If you are a copywriter and you are writing content for clients—which I often have a hand in doing—it’s kind of fun to put it in there and try to mix your content up to get it less AI-sounding. And it will let you know where things are feeling very AI.
Depending on the model, AI does a couple of really quirky things that you might be able to detect on your own. One thing I’ve noticed is that a lot of times AI likes to use the word “ensure”—E-N-S-U-R-E. I don’t know why. It really likes to use it a lot; it keeps forcing it in. I’ve had a couple of content models that I’ve created for clients where I had to specifically go in and say, stop using the word “ensure” so much. And there’s a couple of things like that, or like a double dash, a long dash.
Jeff:
Yeah, an em dash.
Jacob:
An em dash.
Jeff:
Yeah, it loves—oh yeah, it does love using em dashes, which I like, but like, you know, every sentence—like, come on.
So what you’re saying is you could use Grammarly… okay, you shouldn’t do this, kids—not that any kids listen to our podcast—but if you had to write an essay for high school and you did it with ChatGPT, you could then feed it through Grammarly and figure out how to make it not sound like AI, so it wouldn’t get flagged by your teachers. Because I’m sure schools are all exploring ways to detect AI generation on their own, or employing things like Grammarly. I don’t know.
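[Editor’s note: the “tells” Jacob mentions can even be counted crudely by hand. The toy sketch below tallies how often a text leans on “ensure” and on em dashes per thousand words; real detectors, Grammarly’s included, rely on far deeper statistical modeling than this.]

```python
# Toy heuristic: rate of two common AI "tells" per 1,000 words. Purely
# illustrative; nothing like a real AI-text detector.
import re

def quirk_rates(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)  # avoid dividing by zero on empty input
    ensures = sum(1 for w in words if w.lower() == "ensure")
    em_dashes = text.count("\u2014")  # the em dash character
    return {
        "ensure_per_1000_words": 1000 * ensures / n,
        "em_dash_per_1000_words": 1000 * em_dashes / n,
    }

sample = "We ensure quality\u2014and we ensure speed\u2014to ensure success."
print(quirk_rates(sample))  # both rates are conspicuously high here
```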
Jacob:
No, no, there’s a whole industry for that—even Grammarly offers tools specifically for educational institutions to do this detection work en masse.
Jeff:
It’s getting very incestuous, but you know, whatever. It’s the world we live in.
Jacob:
But you have a great point. I mean, if you were going to try to pass the test, you could have AI generate it, you could put it in Grammarly, and then you could spend a lot of time massaging the text in order to get it to not be wholly detected as AI.
Jeff:
Just write your own essays, kids.
Jacob:
It does seem like that would be the easier path.
Jeff:
How many years did we write essays? Like freshman year of high school—
Jacob:
Yeah, we graduated college without AI.
Jeff:
We graduated without AI. You should too.
Jacob:
Yeah.
Jeff:
Do you know how many term papers I wrote—and I wrote them, some of them in German—like, grow up.
Jacob:
I did—yeah, yeah. I don’t think I did any in a foreign language, but I definitely wrote quite a few.
But I think what AI can do is it can help people with the feedback process. I do think one thing that AI is really good at—if you are doing it from a copywriting perspective—is to write what you want to write, give it to ChatGPT, and say, please provide feedback in an ordered—
Jeff:
Yes, bouncing ideas. It’s very good at that.
Jacob:
And then it can basically be like another college professor talking to you: “Well, you could try this; this sentence structure’s a little weird; you could do that.” And as long as you’re not copying and pasting it over, I think—
Jeff:
No, but it gives you—yeah, it is helpful in terms of writing process. That’s what we use it for with our clients mostly. We will give it something, it’ll give us back something, and we’ll be like, hmm, I like this bit, I don’t like that bit, oh, that was a good point. And then we feed it back something else, and then it comes back with more feedback, and so blah blah blah blah blah, and then we can give it to our client.
Jacob:
Google is trying to crack down on this too. So—
Jeff:
Oh, really?
Jacob:
Well, I think—uh, well—
Jeff:
Like in terms of SEO stuff or—
Jacob:
In SEO rankings, because people are flooding the zone with essentially junk content. Google has its own AI detection tool, so they’ll be able to detect AI. And they have a very careful mindset about this, because they also offer a tool that generates AI content—Gemini—and they don’t want you to avoid their tools, but they also don’t want you to game the system. So they’re threading a very narrow path; there will likely be some threshold in their detection, but it’s not clear what. Google basically tells people broad strokes about what’s in their algorithm but doesn’t give them the actual nuts and—
Jeff:
Right, right, right. It’s proprietary.
Jacob:
They don’t—yeah, they keep it close to their chest, but they do—
Jeff:
They don’t want anyone else making that similar algorithm. Theirs is pretty good.
Jacob:
Oh yeah. Their algorithm is unbelievable. It’s their best product by far.
Jeff:
Yes. Their only good product that they’ve ever created.
Jacob:
So anyways, let’s get back to the AI—
Jeff:
So, the only—we covered a lot of what I wanted to talk about. The only thing I really want—I don’t want to get super political or specifically political. I’ll just pose it this way: What do you think the biggest dangers are here for us as a political society?
Jacob:
I think it’s the valuing of AI as a creative tool. I think it’s going to create a weird gap in skills for people that lean into it for creative needs when a lot of things—like Studio Ghibli, back to that—were hard-earned skills. I mean, that man, Miyazaki—
Jeff:
Yeah. His beautiful, beautiful art and he’s been doing it for decades.
Jacob:
A lifetime of craft, and now you can just poop it out. Okay, that’s just bad. So that’s the societal side of it.
Politically—I think we’re going to see people, mostly nefarious people, just create fake things all the time. Fake stories. They can make an AI-generated fake story that looks real, with fake quotes, and then post it somewhere and then get—
Jeff:
Yeah, I’ve seen people use ChatGPT to do that in lawsuits, where they’ll cite things in the documents they submit to court, and the things they’re citing—like previous court cases or something—
Jacob:
That are just made-up records. Okay, there’s a perfect example.
And then we have AI image generation, so people will put political figures in situations they were never actually in—
Jeff:
Yeah, which we’ve had for a while as deepfakes. But I think it’s—those used to be pretty difficult to get realistic, and now you could have Barack Obama or whatever say anything you want, and it could fool a lot of people.
Jacob:
Yeah. And then there’s Sora, OpenAI’s video model, and quite a few other video models out there that are getting pretty remarkable on a smaller scale. So again, ChatGPT two years ago couldn’t make a hand to save its life. It either had eight fingers, two fingers, or it was deformed in some way.
Jeff:
Do you remember the videos of—maybe you didn’t, ’cause you’re not on social media—I’m on social media, you’re not. But there was this thing for a while. It was like Will Smith eating spaghetti.
Jacob:
I think I remember hearing about that, yeah.
Jeff:
It could not handle eating spaghetti. I just thought it was so funny. But now I bet it probably could.
Jacob:
Oh yeah. The video—and it was trying to do it—and his face was like melting.
Jeff:
So funny.
Jacob:
It was just demented. But look at that progress and project it out a few years: an AI deepfake is going to look so much like reality, it’s disturbing.
Jeff:
Yeah. Sometimes it already does look so much like reality, it’s disturbing.
Jacob:
And now it can write, it can do hands, it can do all of that. One of those tests you do when you get the new GPT model—it’s like, make me a step-by-step poster guide for how to wash your hands during cooking. It gave me logical steps to follow that made sense. It made a great graphic design in a comic book style of like the 1960s, ’70s. It nailed it. There were a couple of things in there that were a little weird, but whatever. So it’s only gonna get better.
So I think that from a deepfake perspective and a government perspective, it’s going to create a terrible opportunity for disinformation to flourish like nobody’s business, because people won’t be able to trust even the truth. It’s not that I think they’re gonna trust deepfakes; I think deepfakes will water down trust in general.
Jeff:
Yeah, even more than it already has been watered down.
Jacob:
Yeah. That’s why I think—not that the current administration has made any mention of talking about this topic at all, which is concerning.
Jeff:
I’ve not heard any, but that doesn’t mean they haven’t.
Jacob:
Right, right. I mean, well, yeah, it’s hard to—
Jeff:
I pay attention to the news, but I try to keep myself sane at the same time.
Jacob:
I pay pretty close attention to the news and I—well, I don’t want to get into it too much—but essentially, I don’t think the current administration cares that much about this topic at this time.
Jeff:
Very little indication that they do, yeah.
Jacob:
Yeah. But if somebody does care about it, the best thing they can do is require every organization on the internet—period—and ideally the phone manufacturers themselves, on devices with AI-powered tools, to have an AI-powered integration that can actively, constantly detect and tag content—this is AI, this isn’t—at least when you’re browsing the internet, or in most major applications with a certain threshold of regular users.
And the only problem with this is that every time the government has made a regulation like this—the CAN-SPAM Act, or even the website accessibility stuff they’ve been trying to get better at, WCAG and all that—the FCC, which is the organization that’s supposed to enforce many of these things, or—
Jeff:
Yes. Federal Communications Commission.
Jacob:
Yeah. They—how should I say it—paper tiger? What’s the phrase?
Jeff:
I thought it was paper dragon. But yeah, maybe it is paper tiger.
Jacob:
Maybe it could be both. But anyways—it looks scary, but there’s nothing to it. They don’t really go out and enforce these rules, and when they do, they enforce them unevenly and at random. That’s one of the biggest problems: if they did make a requirement like this, they would actually have to make sure people follow it.
But I think the thing for you as an everyday user—and anyone listening to this—is to get a couple of links bookmarked in your browser to help you detect AI. Some of these tools also come with browser extensions that you can add to your device. That will help you with AI detection, and you can pull it up and ask it to take a look for you.
Jeff:
If you’re ever concerned about something you’re seeing where it’s like, “hmm, is this real?” Then you’ll have that right there at your fingertips.
Jacob:
Link to it. There’s a couple of Chrome extensions out there—
Jeff:
Again, we’re not plugging anybody, but you know, there’s stuff out there.
So Jacob, any final thoughts for us on this week’s episode?
Jacob:
I’m gonna put my head in the sand and hope it all goes away. No, I’m just kidding. I think the best thing anybody can do in this day and age is get extensions on your browser and tools in your pocket to detect this stuff as it’s coming at you. And if you notice that a certain resource you use frequently relies on AI-generated content and imagery, you can know to trust it a little less—they’re not really doing the due diligence. And then the other one is: don’t upvote. Don’t like. Don’t help AI.
Jeff:
Yeah. Don’t promote it in the algorithm if you can help it.
Jacob:
Just ignore it. It’s kind of like Queen Mab and Merlin. Everyone knows what I’m talking about. I’m sure there’s one kid from the nineties—late nineties—who knows exactly what I’m talking about, so I’m just going to leave it at that. Look up Sam Neill’s Merlin and the last scene in how they defeat Queen Mab. That’s how we have to do this. That’s how we have to do a lot of things in society. And that’s what I’m going to leave it on.
Jeff:
That is a very cryptic final thought ’cause I bet no one knows what you’re talking about.
All right. Well, thanks for listening, guys. We’ll see you in a couple weeks.
Jacob:
All right. See ya. Bye.
Jeff:
All right. Bye.
Almost never miss an episode!
Well, we're only human.
Subscribe to receive emails in your inbox when every new episode drops ... or when we want to send you obnoxious emails to sell you stuff you don't really need.
Just kidding, we respect the privilege of being in your inbox.