{"id":46788,"date":"2024-05-09T10:33:00","date_gmt":"2024-05-09T10:33:00","guid":{"rendered":"https:\/\/news.talkwithrattan.com\/index.php\/2024\/05\/09\/artificial-intelligence-is-making-it-hard-to-tell-truth-from-fiction\/"},"modified":"2024-05-09T10:33:01","modified_gmt":"2024-05-09T10:33:01","slug":"artificial-intelligence-is-making-it-hard-to-tell-truth-from-fiction","status":"publish","type":"post","link":"https:\/\/news.talkwithrattan.com\/index.php\/2024\/05\/09\/artificial-intelligence-is-making-it-hard-to-tell-truth-from-fiction\/","title":{"rendered":"Artificial intelligence is making it hard to tell truth from fiction"},"content":{"rendered":"<div style=\"text-align:center\"><img loading=\"lazy\" decoding=\"async\" width=\"677\" height=\"450\" src=\"https:\/\/i1.wp.com\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_language_models-677x450.jpg?resize=677,450&amp;ssl=1\" class=\"attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"Artificial intelligence is making it hard to tell truth from fiction\" title=\"Artificial intelligence is making it hard to tell truth from fiction\" \/><\/div><p> <br \/>\n<\/p>\n<div data-component=\"video-embed\">\n<p>Taylor Swift has scores of newsworthy achievements, from dozens of music awards to several world records. But last January, the mega-star made headlines for something much worse and completely outside her control. She was a target of online abuse.<\/p>\n<p>Someone had used <a href=\"https:\/\/www.snexplores.org\/article\/scientists-say-artificial-intelligence-definition-pronunciation\">artificial intelligence<\/a>, or AI, to create fake nude images of Swift. These pictures flooded social media. Her fans quickly responded with calls to #ProtectTaylorSwift. But many people still saw the fake pictures.<\/p>\n<p>That attack is just one example of the broad array of bogus media \u2014 including audio and visuals \u2014 that non-experts can now make easily with AI. 
Celebrities aren\u2019t the only victims of such heinous attacks. Last year, for example, male classmates spread fake sexual images of girls at a New Jersey high school.<\/p>\n<aside class=\"sn-conversion rich-text alignright\"\/>\n<p>AI-made pictures, audio clips or videos that masquerade as those of real people are known as deepfakes. This type of content has been used to put words in politicians\u2019 mouths. In January, robocalls <a href=\"https:\/\/thehill.com\/policy\/technology\/4442156-fcc-targets-ai-generated-robocalls-after-biden-primary-deepfake\/\" rel=\"noopener\">sent out a deepfake recording of President Joe Biden\u2019s voice<\/a>. It asked people not to vote in New Hampshire\u2019s primary election. And a deepfake video of Moldovan President Maia Sandu last December made it seem she supported a pro-Russian political party leader.<\/p>\n<p>AI has also produced false information about science and health. In late 2023, an Australian group fighting wind energy claimed there was research showing that newly proposed wind turbines could kill 400 whales a year. They pointed to a study seemingly published in <em>Marine Policy<\/em>. But an <a href=\"https:\/\/www.smh.com.au\/environment\/climate-change\/wind-farms-misinformation-peter-dutton-climate-change-20231101-p5egvn.html\" rel=\"noopener\">editor of that journal<\/a> said <a href=\"https:\/\/www.abc.net.au\/news\/2023-11-07\/editor-blasts-fake-study-linking-whale-deaths-to-wind-farms\/103069922\" rel=\"noopener\">the study didn\u2019t exist<\/a>. Apparently, someone used AI to mock up a fake article that falsely appeared to come from the journal.<\/p>\n<p>Many people have used AI to lie. But AI can also misinform by accident. One research team posed questions about voting to five AI models. 
The models <a href=\"https:\/\/www.ias.edu\/sites\/default\/files\/Angwin-Nelson-Palta_SeekingReliableElectionInformationDontTrustAI_2-27-24.pdf\" rel=\"noopener\">wrote answers that were often wrong and misleading<\/a>, the team shared in a February 2024 report for AI Democracy Projects.<\/p>\n<p>Inaccurate information (misinformation) and outright lies (disinformation) have been around for years. But AI is making it easier, faster and cheaper to spread unreliable claims. And although <a href=\"https:\/\/www.snexplores.org\/article\/ai-deepfake-voice-scams-audio-tool\">some tools exist to spot or limit AI-generated fakes<\/a>, experts worry these efforts will become an arms race. AI tools will get better and better, and groups trying to stop fake news will struggle to keep up.<\/p>\n<p>The stakes are high. With a slew of more convincing fake files popping up across the internet, it\u2019s hard to know who and what to trust.<\/p>\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\">\n<p><div class=\"youtube-embed\" data-video_id=\"HK6y8DAPN_0\"><iframe loading=\"lazy\" title=\"Introducing Sora \u2014 OpenAI\u2019s text-to-video model\" width=\"696\" height=\"392\" src=\"https:\/\/www.youtube.com\/embed\/HK6y8DAPN_0?feature=oembed&#038;enablejsapi=1\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<figcaption class=\"wp-element-caption\">This collection of video clips shows what can be made using artificial intelligence, just by typing in a description of an event \u2014 even a fanciful one, like pirate ships navigating rough coffee seas in a kitchen mug. 
OpenAI posted this demo film before the release of its text-to-video model Sora.<\/figcaption><\/figure>\n<h4 class=\"wp-block-heading\">Churning out fakes<\/h4>\n<p>Making realistic fake photos, news stories and other content used to need a lot of time and skill. That was especially true for deepfake audio and video clips. But AI has come a long way in just the last year. Now almost anyone can use generative AI to fabricate texts, pictures, audio or video \u2014 sometimes within minutes.<\/p>\n<p>A group of healthcare researchers recently showed just how easy this can be. Using tools on OpenAI\u2019s Playground platform, two team members produced 102 blog articles in about an hour. The pieces contained more than 17,000 words of persuasive <a href=\"https:\/\/jamanetwork.com\/journals\/jamainternalmedicine\/article-abstract\/2811333\" rel=\"noopener\">false information about vaccines and vaping.<\/a><\/p>\n<p>\u201cIt was surprising to discover how easily we could create disinformation,\u201d says Ashley Hopkins. He\u2019s a clinical epidemiologist \u2014 or disease detective \u2014 at Flinders University in Adelaide, Australia. He and his colleagues shared these findings last November in <em>JAMA Internal Medicine<\/em>.<\/p>\n<p>People don\u2019t need to oversee every bit of AI content creation, either. Websites can churn out false or misleading \u201cnews\u201d stories with little or no oversight. Many of these sites tell you little about who\u2019s behind them, says McKenzie Sadeghi. She\u2019s an editor who focuses on AI and foreign influence at NewsGuard in Washington, D.C.<\/p>\n<p>By May 2023, Sadeghi\u2019s group had identified 49 such sites. Less than a year later, that number had skyrocketed to more than 750. Many have news-sounding names, such as Daily Time Update or iBusiness Day. But their \u201cnews\u201d may be made-up events.<\/p>\n<p>Generative AI models produce real-looking fakes in different ways. 
Text-writing models are generally designed to predict which words should follow others, explains Zain Sarwar. He\u2019s a graduate student studying computer science at the University of Chicago in Illinois. AI models learn how to do this using huge amounts of existing text.<\/p>\n<p>During training, the AI tries to predict which words will follow others. Then, it gets feedback on whether the words it picked are right. In this way, the AI learns to follow complex rules about grammar, word choice and more, Sarwar says. Those rules help the model write new material when humans ask for it.<\/p>\n<div class=\"wp-block-image  has-alignleft\">\n<figure class=\"alignleft size-large\"><figcaption class=\"wp-element-caption\"><span class=\"caption wp-caption-3139528\">Advances in generative AI\u2019s ability to produce natural-sounding language are part of what could make it especially effective for disinformation, experts warn.<\/span><span class=\"credit wp-credit-3139528\">Leon Neal\/Staff\/Getty Images News<\/span><\/figcaption><\/figure>\n<\/div>\n<p>AI models that make images work in a variety of ways. Some use a type of generative adversarial network, or GAN. The network contains two systems: a generator and a detective. The generator\u2019s task is to produce better and better realistic images. The detective then hunts for signs that something is wrong with these fake images.<\/p>\n<p>\u201cThese two models are trying to fight each other,\u201d Sarwar says. But at some point, an image from the generator will fool the detective. That believably real image becomes the model\u2019s output.<\/p>\n<p>Another common way to make AI images is with a diffusion model. \u201cIt\u2019s a forward and a backward procedure,\u201d Sarwar says. The first part of training takes an image and adds random noise, or interference. Think about fuzzy pixels on old TVs with bad reception, he says. The model then removes layers of random noise over and over. 
Finally, it gets a clear image close to the original. Training does this process many times with many images. The model can then use what it learned to create new images for users.<\/p>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"1030\" height=\"639\" decoding=\"async\" src=\"https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disiniformation_biometrics.png\" alt=\"an illustration of a woman's face straight on, and simulated metrics that an AI might use when training itself on human faces\" class=\"wp-image-3139530\" srcset=\"https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disiniformation_biometrics.png 1030w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disiniformation_biometrics-617x383.png 617w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disiniformation_biometrics-725x450.png 725w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disiniformation_biometrics-300x186.png 300w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disiniformation_biometrics-768x476.png 768w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disiniformation_biometrics-935x580.png 935w\" sizes=\"(max-width: 1030px) 100vw, 1030px\"\/><figcaption class=\"wp-element-caption\"><span class=\"caption wp-caption-3139530\">To make convincing imagery, AI systems first train by \u201cviewing\u201d thousands or millions of images of something \u2014 perhaps cats or faces or waves at sea. Later they can use what they learned to create new and compelling fake versions.<\/span><span class=\"credit wp-credit-3139530\">izusek\/E+\/Getty Images Plus.<\/span><\/figcaption><\/figure>\n<h4 class=\"wp-block-heading\">What\u2019s real? 
What\u2019s fake?<\/h4>\n<p>AI models have become so good at their jobs that many people won\u2019t recognize that the created content is fake.<\/p>\n<p>AI-made content \u201cis generally better than when humans create it,\u201d says Todd Helmus. He\u2019s a behavioral scientist with RAND Corporation in Washington, D.C. \u201cPlain and simple, it looks real.\u201d<\/p>\n<p>In one study, people tried to judge whether tweets (now X posts) came from an AI model or real humans. People <a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.adh1850\" rel=\"noopener\">believed more of the AI models\u2019 false posts<\/a> than false posts written by humans. People also were more likely to believe the AI models\u2019 true posts than true posts that had been written by humans.<\/p>\n<p>Federico Germani and his colleagues shared these results in <em>Science Advances<\/em> last June. Germani studies disinformation at the University of Zurich in Switzerland. \u201cThe AI models we have now are really, really good at mimicking human language,\u201d he says.<\/p>\n<p>What\u2019s more, AI models can now write with <a href=\"https:\/\/www.snexplores.org\/article\/studies-test-ways-slow-spread-fake-news\">emotional language<\/a>, much as people do. \u201cSo they kind of structure the information and the text in a way that is better at manipulating people,\u201d Germani says.<\/p>\n<p>People also have trouble telling fake images from real ones. A 2022 study in <em>Vision Research<\/em> showed that people could generally tell the difference between pictures of real faces and faces made with a GAN model from early 2019. But participants had trouble spotting realistic fake faces made by more advanced AI about a year later. 
In fact, people\u2019s later assessments were no better than guesses.<\/p>\n<p>This hints that people \u201coften perceived the realistic artificial faces to be <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0042698922000852?via%3Dihub\" rel=\"noopener\">more authentic than the actual real faces<\/a>,\u201d says Michoel Moshel. Newer models \u201cmay be able to generate even more realistic images than the ones we used in our study,\u201d he adds. He\u2019s a graduate student at Macquarie University in Sydney, Australia, who worked on the research. He studies brain factors that play a role in thinking and learning.<\/p>\n<p>Moshel\u2019s team observed <a href=\"https:\/\/www.snexplores.org\/article\/explainer-how-read-brain-activity\">brain activity<\/a> as people looked at images for the experiment. That activity differed when people looked at a picture of a real face versus an AI-made face. But the differences weren\u2019t the same for each type of AI model. More research is needed to find out why.<\/p>\n<h4 class=\"wp-block-heading\">How can we know what\u2019s true anymore?<\/h4>\n<p>Photos and videos used to be proof that some event happened. But with AI deepfakes floating around, that\u2019s no longer true.<\/p>\n<p>\u201cI think the younger generation is going to learn not to just trust a photograph,\u201d says Carl Vondrick. He\u2019s a computer scientist at Columbia University in New York City. He spoke at a February 27 program there about the growing flood of AI content.<\/p>\n<p>That lack of trust opens the door for politicians and others to deny something happened \u2014 even when non-faked video or audio shows that it had. In late 2023, for example, U.S. presidential candidate Donald Trump claimed that political foes had used AI in an ad that made him look feeble. In fact,<em> Forbes<\/em> <a href=\"https:\/\/www.forbes.com\/sites\/mattnovak\/2023\/12\/04\/donald-trump-falsely-claims-attack-ad-used-ai-to-make-him-look-bad\/\" rel=\"noopener\">reported<\/a>, the ad appeared to show fumbles that really happened. 
Trump did not tell the truth.<\/p>\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\">\n<p><div class=\"youtube-embed\" data-video_id=\"IOyrbsNcXt8\"><iframe loading=\"lazy\" title=\"Deepfakes: How to spot them | CBC Kids News\" width=\"696\" height=\"392\" src=\"https:\/\/www.youtube.com\/embed\/IOyrbsNcXt8?feature=oembed&#038;enablejsapi=1\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<figcaption class=\"wp-element-caption\">This segment is from CBC Kids News. It shows how easy it is to be fooled by deepfake video and audio recordings made using artificial intelligence. It also explains what deepfakes are and how critical thinking may help you avoid being tricked by them.\u00a0<\/figcaption><\/figure>\n<p>As deepfakes become more common, experts worry about the <em>liar\u2019s dividend<\/em>. \u201cThat dividend is that no information becomes trustworthy \u2014 [so] people don\u2019t trust anything at all,\u201d says Alondra Nelson. She\u2019s a sociologist at the Institute for Advanced Study in Princeton, N.J.<\/p>\n<p>The liar\u2019s dividend makes it hard to hold public officials or others accountable for what they say or do. \u201cAdd on top of that a fairly constant sense that everything could be a deception,\u201d Nelson says. That \u201cis a recipe for really eroding the relationship that we need between us as individuals \u2014 and as communities and as societies.\u201d<\/p>\n<p>Lack of trust will undercut society\u2019s sense of a shared reality, explains Ruth Mayo. She\u2019s a psychologist at the Hebrew University of Jerusalem in Israel. Her work focuses on how people think and reason in social settings. 
\u201cWhen we are in a distrust mindset,\u201d she says, \u201cwe simply don\u2019t believe anything \u2014 not even the truth.\u201d That can hurt people\u2019s ability to make well-informed decisions about elections, health, foreign affairs and more.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"675\" height=\"450\" decoding=\"async\" src=\"https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_Danielle_Coffee-675x450.jpg\" alt=\"Danielle Coffee speaks at a U.S. Senate committee hearing\" class=\"wp-image-3139536\" srcset=\"https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_Danielle_Coffee-675x450.jpg 675w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_Danielle_Coffee-574x383.jpg 574w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_Danielle_Coffee-279x186.jpg 279w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_Danielle_Coffee-768x512.jpg 768w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_Danielle_Coffee-870x580.jpg 870w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_Danielle_Coffee.jpg 1030w\" sizes=\"(max-width: 675px) 100vw, 675px\"\/><figcaption class=\"wp-element-caption\"><span class=\"caption wp-caption-3139536\">\u201cWithout proper safeguards, we cannot rely on a common set of facts,\u201d says Danielle Coffee about AI. This head of News Media Alliance is seen at a U.S. Senate committee hearing in January 2024. 
It was held to discuss AI and its impact on democracy, elections, privacy, legal issues and news.<\/span><span class=\"credit wp-credit-3139536\">Kent Nishimura\/Stringer\/Getty Images News<\/span><\/figcaption><\/figure>\n<h4 class=\"wp-block-heading\">An arms race<\/h4>\n<p>Some AI models have been built with guardrails to <a href=\"https:\/\/openai.com\/blog\/how-openai-is-approaching-2024-worldwide-elections\" rel=\"noopener\">keep them from creating fake news, photos and videos<\/a>. Rules built into a model can tell it not to do certain tasks. For example, someone might ask a model to churn out notices that claim to come from a government agency. The model should then tell the user it won\u2019t do that.<\/p>\n<p>In a recent study, Germani and his colleagues found that <a href=\"https:\/\/arxiv.org\/abs\/2403.03550\" rel=\"noopener\">using polite language could speed up how quickly some models churn out disinformation<\/a>. Those models learned how to respond to people using human-to-human interactions during training. And people often respond more positively when others are polite. So it\u2019s likely that \u201cthe model has simply learned that statistically, it should operate this way,\u201d Germani says. Wrongdoers might use that to manipulate a model to produce disinformation.<\/p>\n<p>Researchers are working on ways to spot AI fakery. So far, though, there\u2019s no surefire fix.<\/p>\n<p>Sarwar was part of a team that tested several AI-detection tools. Each tool generally did a good job at spotting AI-made texts \u2014 if those texts were similar to what the tool had seen in training. The tools <a href=\"https:\/\/www.computer.org\/csdl\/proceedings-article\/sp\/2023\/933600a019\/1OXH0P6jEek\" rel=\"noopener\">did not perform as well<\/a> when researchers showed them texts that had been made with other AI models. 
The problem is that for any detection tool, \u201cyou cannot possibly train it on all possible texts,\u201d Sarwar explains.<\/p>\n<div class=\"wp-block-group cheat-sheet-cta is-layout-flow\">\n<div class=\"wp-block-group__inner-container\">\n<h2 class=\"wp-block-heading has-text-align-center\">Do you have a science question? We can help!<\/h2>\n<p class=\"has-text-align-center\"><a href=\"https:\/\/forms.gle\/YbhPosFTMqjbSNnV7\" target=\"_blank\" rel=\"noreferrer noopener\">Submit your question here<\/a>, and we might answer it in an upcoming issue of\u00a0<em>Science News Explores<\/em>.<\/p>\n<\/div>\n<\/div>\n<p>One AI-spotting tool did work better than others. Besides the basic steps other programs used, this one analyzed the proper nouns in a text. Proper nouns are words that name specific people, places and things. AI models sometimes mix these words up in their writing, and this helped the tool to better home in on fakes, Sarwar says. His team shared their findings on this at an IEEE conference last year.<\/p>\n<p>But there are ways to get around those protections, says Germani at the University of Zurich.<\/p>\n<p>Digital \u201cwatermarks\u201d could also help distinguish real media from AI-made media. Some businesses already use logos or shading to label their photos or other materials. AI models could similarly insert labels into their outputs. That might be an obvious mark. Or it could be a subtle notation or a pattern in the computer code for text or an image. The label would then be a tip-off that AI had made these files.<\/p>\n<p>In practice, that means there could be many, many watermarks. Some people might find ways to erase them from AI images. Others might find ways to put counterfeit AI watermarks on real content. 
Or people may ignore watermarks altogether.<\/p>\n<p>In short, \u201c<a href=\"https:\/\/www.brookings.edu\/articles\/detecting-ai-fingerprints-a-guide-to-watermarking-and-beyond\/\" rel=\"noopener\">watermarks aren\u2019t foolproof \u2014 but labels help<\/a>,\u201d says Siddarth Srinivasan. He\u2019s a computer scientist at Harvard University in Cambridge, Mass. He reviewed the role of watermarks in a January 2024 report.<\/p>\n<p>Researchers will continue to improve tools to spot AI-produced files. Meanwhile, some people will keep <a href=\"https:\/\/www.snexplores.org\/article\/chatbot-jailbreaks-bad-ai\">working on ways to help AI evade detection<\/a>. And AI will get even better at producing realistic material. \u201cIt\u2019s an arms race,\u201d says Helmus at RAND.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"675\" height=\"450\" decoding=\"async\" src=\"https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_AI_safety_Kamala_Harris-675x450.jpg\" alt=\"a photo of Kamala Harris greeting other delegates\" class=\"wp-image-3139535\" srcset=\"https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_AI_safety_Kamala_Harris-675x450.jpg 675w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_AI_safety_Kamala_Harris-574x383.jpg 574w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_AI_safety_Kamala_Harris-279x186.jpg 279w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_AI_safety_Kamala_Harris-768x512.jpg 768w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_AI_safety_Kamala_Harris-870x580.jpg 870w, https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_AI_safety_Kamala_Harris.jpg 1030w\" sizes=\"(max-width: 675px) 100vw, 675px\"\/><figcaption class=\"wp-element-caption\"><span class=\"caption 
Government leaders">
wp-caption-3139535\">Government leaders, AI companies and researchers met for a two-day AI Safety Summit in Bletchley, England, in November 2023. Here U.S. Vice President Kamala Harris greets other delegates at one of the conference sessions.<\/span><span class=\"credit wp-credit-3139535\">WPA Pool\/Pool\/Getty Images News<\/span><\/figcaption><\/figure>\n<p>Laws can impose some limits on producing AI content. Yet there will never be a way to fully control AI, because these systems are always changing, says Nelson at the Institute for Advanced Study. She thinks it might be better to focus on policies that <a href=\"https:\/\/www.snexplores.org\/article\/artificial-intelligence-ai-safety-good-behavior\">require AI to do only good and beneficial tasks<\/a>. So, no lying.<\/p>\n<p>Last October, President Biden issued an <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\" rel=\"noopener\">executive order<\/a> on controlling AI. It said that the federal government will use existing laws to combat fraud, bias, discrimination, privacy violations and other harms from AI. The U.S. Federal Communications Commission has already used a 1991 law to <a href=\"https:\/\/apnews.com\/article\/fcc-elections-artificial-intelligence-robocalls-regulations-a8292b1371b3764916461f60660b93e6\" rel=\"noopener\">ban robocalls with AI-generated voices<\/a>. And the U.S. Congress, which passes new laws, is considering <a href=\"https:\/\/www.brennancenter.org\/our-work\/research-reports\/artificial-intelligence-legislation-tracker\" rel=\"noopener\">further action<\/a>.<\/p>\n<h4 class=\"wp-block-heading\">What can you do?<\/h4>\n<p>Education is one of the best ways to avoid being taken in by AI fakery. 
People have to know that we can be \u2014 and often are \u2014 targeted by fakes, Helmus says.<\/p>\n<p>When you see news, images or even audio, try to take it in as if it could be true or false, suggests Mayo at the Hebrew University of Jerusalem. Then try to evaluate its reliability. She shared that advice in the April issue of <em>Current Opinion in Psychology<\/em>.<\/p>\n<p>Use caution in where you look for information, too, adds Hopkins at Flinders University. \u201cAlways seek medical information from reliable health sources, such as your doctor or pharmacist.\u201d And be careful about online sources \u2014 especially social media and AI chatbots, he adds. Check out the authors and their backgrounds. See who runs and funds websites. Always <a href=\"https:\/\/www.snexplores.org\/blog\/outside-comment\/fact-checking-how-think-journalist\">see if you can confirm the \u201cfacts\u201d somewhere else<\/a>.<\/p>\n<p>Nelson hopes that today\u2019s kids and teens will help slow AI\u2019s spread of bogus claims. \u201cMy hope,\u201d she says, \u201cis that this generation will be better equipped to look at text and video images and ask the right questions.\u201d<\/p>\n<aside class=\"sn-conversion rich-text\"\/><\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/www.snexplores.org\/article\/artificial-intelligence-ai-deepfakes-trust-information\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Taylor Swift has scores of newsworthy achievements, from dozens of music awards to several world records. But last January, the mega-star made headlines for something much worse and completely outside her control. She was a target of online abuse. Someone had used artificial intelligence, or AI, to create fake nude images of Swift. 
These pictures [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":46789,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"tdm_status":"","tdm_grid_status":"","fifu_image_url":"https:\/\/www.snexplores.org\/wp-content\/uploads\/2024\/05\/1030_AI_disinformation_language_models-677x450.jpg","fifu_image_alt":"","footnotes":""},"categories":[606],"tags":[5758,37318,8242,2538,1844,10200],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/posts\/46788"}],"collection":[{"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/comments?post=46788"}],"version-history":[{"count":1,"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/posts\/46788\/revisions"}],"predecessor-version":[{"id":46790,"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/posts\/46788\/revisions\/46790"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/media\/46789"}],"wp:attachment":[{"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/media?parent=46788"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/categories?post=46788"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/news.talkwithrattan.com\/index.php\/wp-json\/wp\/v2\/tags?post=46788"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}