Was this written by a human or AI? ¯_(ツ)_/¯

2023-03-22 05:47:38

Imagine this. You’re browsing OkCupid when you come across two profiles that catch your eye. The photos are real and attractive to you, and the interests described match your own. Their personal blurbs even mention outings that you enjoy:

two samples of very similar sounding dating app profiles

All things being equal, how might you decide between the two profiles? Not sure? Well, what if you were told that one of the blurbs – Person 1’s – was generated by AI?

AI-generated text is increasingly making its way into our daily lives. Auto-complete in emails and ChatGPT-generated content are becoming mainstream, leaving humans vulnerable to deception and misinformation. Even in contexts where we expect to be conversing with another human – like online dating – the use of AI-generated text is rising. A survey from McAfee indicates that 31% of adults plan to use, or are already using, AI in their dating profiles.

What are the implications and risks of using AI-generated text, especially in online dating, hospitality, and professional situations – areas where the way we represent ourselves is critically important to how we’re perceived?

“Do I want to hire this person? Do I want to date this person? Do I want to stay in this person’s home? These are decisions that are deeply personal and that we make quite regularly,” says Jeff Hancock, professor of communication in the Stanford School of Humanities and Sciences, founding director of Stanford’s Social Media Lab, and a Stanford Institute for Human-Centered AI faculty member. Hancock and his collaborators set out to explore this problem space by looking at how successful we are at differentiating between human- and AI-generated text in online dating, hospitality, and professional contexts, on platforms including OKCupid and AirBNB.

What Hancock and his team learned was eye-opening: participants in the study could distinguish between human and AI text with only 50–52% accuracy – about the same as the random chance of a coin flip.

The real concern, according to Hancock, is that we can create AI “that comes across as more human than human, because we can optimize the AI’s language to take advantage of the kinds of assumptions that humans have. That’s worrisome because it creates a risk that these machines can pose as more human than us,” with the potential to deceive.

Low Accuracy, High Agreement

An expert in the field of deception detection, Hancock wanted to apply his knowledge in that area to AI-generated text. “One thing we already knew is that people are generally bad at detecting deception because we default to trust. For this research, we were curious: what happens when we take this idea of deception detection and apply it to generative AI, to see if there are parallels with other deception and trust literature?”

Read the full study, Human Heuristics for AI-generated Language Are Flawed


After presenting participants with text samples from the three social media platforms, Hancock and his collaborators discovered that although we aren’t successful at distinguishing between AI- and human-generated text, our guesses aren’t entirely random either.

We all rely on similar – albeit equally incorrect – assumptions based on reasonable intuition and shared language cues. In other words, we often get the wrong answer – human or AI – but for reasons we agree on.

For example, high grammatical correctness and the use of first-person pronouns were often attributed, incorrectly, to human-generated text. Referencing family life and using informal, conversational language were also, incorrectly, attributed to human-generated text. “We were surprised by the core insight that my collaborator Maurice Jakesch had: that we were all relying on the same heuristics, and that those heuristics were flawed in the same ways.”
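To make these cues concrete, here is a toy sketch of the kind of heuristic judgment the study describes. This is purely illustrative – the cue lists and scoring are invented for this example, not taken from the paper – but it shows why such heuristics fail: a language model can produce first-person pronouns, family references, and informal phrasing just as easily as a person can.

```python
import re

# Illustrative cue patterns only; these are NOT the study's materials,
# just examples of the signals participants reportedly leaned on.
HUMAN_CUES = {
    "first_person": re.compile(r"\b(I|me|my|we|our)\b"),
    "family": re.compile(r"\b(family|mom|dad|kids|brother|sister)\b", re.IGNORECASE),
    "informal": re.compile(r"\b(gonna|kinda|lol|haha)\b", re.IGNORECASE),
}

def human_likeness_score(text: str) -> int:
    """Count how many 'sounds human' cues fire; higher = judged more human.

    The study's point is that this style of judgment is unreliable:
    AI-generated text can trigger the same cues, so scorers like this
    end up near coin-flip accuracy.
    """
    return sum(bool(pattern.search(text)) for pattern in HUMAN_CUES.values())

print(human_likeness_score("I love hiking with my family, haha"))   # all three cues fire
print(human_likeness_score("The property features modern amenities."))  # no cues fire
```

Either sentence could have been written by a human or a model, which is exactly why agreement on the cues does not translate into accuracy.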

Hancock and his team also observed no meaningful difference in accuracy rates by platform, meaning participants were equally flawed at evaluating text passages in online dating, professional, and hospitality contexts.

Risk for Misinformation

Because of these flawed heuristics, and because it’s becoming cheaper and easier to produce content, Hancock believes we’ll see more misinformation in the future. “The amount of AI-generated content could overtake human-generated content on the order of years, and that could really disrupt our information ecosystem. When that happens, the trust-default is undermined, and it can decrease trust in one another.”

So how can we become better at differentiating between AI- and human-generated text? “We all need to be involved in the solution here,” says Hancock.

One idea the team proposed is giving AI a recognizable accent. “When you go to England, you can sort of tell where people are from, and even in the U.S., you can say whether someone is from the East Coast, LA, or the Midwest. It doesn’t require any cognitive effort – you just know.” The accent could even be combined with a more technical solution, like AI watermarking. Hancock also suggests that in high-stakes scenarios where authentication is valuable, self-disclosing machines might become the norm.

Even so, Hancock acknowledges that we’re years behind in educating young people about the risks of social media. “This is just another thing they’re going to need to learn, and so far they are receiving zero education and training on it.”

Despite the current murkiness, Hancock remains optimistic. “Seeing the bold glamour filter [on TikTok] yesterday was a little bit shocking. But we often focus on the tech, and we ignore the fact that we humans are the ones using it and adapting to it. And I’m hopeful that we can come up with ways to adjust how this is done, and create norms and perhaps even policies and laws that constrain the negative uses of it.”

Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition.
