Lisa Marie Presley’s death, AI problems and more news literacy lessons


Here’s the latest installment of a regular feature I’ve been running for several years: lessons from the nonprofit News Literacy Project (NLP), which aims to teach students and the public how to sort fact from fiction in our digital — and contentious — age. There has never been a time in recent U.S. history when this skill has been as important as now because of the spread of rumors and conspiracy theories on social and partisan media sites.

The material in this post comes from the Sift, the organization’s newsletter for educators, which has nearly 22,000 subscribers. Published weekly during the school year, it examines timely examples of misinformation, addresses media and press freedom topics, explores social media trends and issues, and includes discussion prompts and activities for the classroom. Get Smart About News, modeled on the Sift, is a free weekly newsletter for the public.

The NLP has an e-learning platform, Checkology, that helps educators teach middle and high school students how to identify credible information, seek out reliable sources and know what to trust, what to dismiss and what to debunk.

It also gives them an appreciation of the importance of the First Amendment and a free press. Checkology and all of the NLP’s resources and programs are free. Since 2016, more than 42,000 educators and 375,000 students in all 50 states, the District of Columbia and more than 120 other countries have registered to use the platform.

Material from the Jan. 23 edition of the Sift:


A graph from the Edelman Trust Barometer report shows political polarization in 28 countries surveyed. The United States, Colombia, Argentina, South Africa, Spain and Sweden are the top six severely polarized countries.

1. The United States is one of six “severely polarized” countries listed in the 2023 Edelman Trust Barometer, among the 28 countries surveyed. The driving factors behind this polarization, according to the report, include distrust in media and government, lack of shared identity, systemic unfairness and economic pessimism. Notably, only 34 percent of people with a polarized mind-set trust media.

Discuss: How can disinformation lead to political polarization? What role do you think online “echo chambers” play in terms of political polarization? How does the inclusion of multiple viewpoints make the national conversation stronger/better?

Idea: Use the NLP’s Newsroom to Classroom program to connect students with a journalist in person or online to discuss standards in news reporting and trust in media.

Resources: “Misinformation” and “Is it legit?” (NLP’s Checkology® virtual classroom).

2. Several errors were recently found in stories generated by artificial intelligence (AI) on CNET, a popular consumer tech news website. Other news outlets have criticized CNET for its lack of transparency around this practice; the site used “CNET Money Staff” as a byline for AI-generated stories and failed to make a public announcement about it.

Following this criticism, CNET’s editor in chief wrote a post explaining that the news site began experimenting with AI in November and noting that 75 CNET articles written by AI and edited by humans had been published since then. CNET is now reportedly pausing its AI usage. Meanwhile, a Futurism report found that CNET’s AI articles included plagiarized work — a serious allegation that may diminish readers’ trust in CNET.

Discuss: If you were a news editor, would you consider using AI to generate stories? Why or why not? What might lead some media outlets to consider “automated journalism”? Is it ethical to publish stories written by AI without clearly disclosing it to readers? Why do you think some news organizations turn to AI as a resource? How is AI currently present in your life?

Idea: Ask students to share their favorite news websites. As a class or in small groups, visit the news sites and share observations about the bylines for each story. How are they represented? Are the reporters clearly credited and identified for each story? Is there contact information for them? Are some stories credited to staff or other, less transparent entities? Are any credited to AI? Why is it important for standards-based news outlets to be transparent about who (or what) is writing or generating their stories?

Note: Reputable news organizations have utilized AI technology in stories. For example, the Associated Press began using AI in 2014 for different projects, including automated stories about corporate earnings and sports.

“A news site used AI to write articles. It was a journalistic disaster.” (Paul Farhi, The Washington Post).

“Inside CNET’s AI-powered SEO money machine” (Mia Sato and James Vincent, the Verge).

Opinion: “Local news will come to rely on AI” (Bill Grueskin, Nieman Lab).

Dig Deeper: Use this think sheet to take notes on the implications of AI generating stories for CNET.

3. Anti-vaccination conspiracy theories surrounding the “sudden deaths” of public figures such as singer-songwriter Lisa Marie Presley and radio DJ Tim Gough continue to spread on Twitter, which no longer enforces its covid-19 misinformation policy and recently restored previously banned accounts. After renowned sports journalist Grant Wahl died of an aortic aneurysm in December, his widow said in an NPR interview that she still receives harassing messages from conspiracy theorists, including one who blamed her for killing her husband through vaccination. “Grant did not deserve that. My family does not deserve that,” she said.

Discuss: Why do you think misinformation about coronavirus vaccines continues to spread online? How does mis- and disinformation online affect people offline? How should social media companies deal with false claims about vaccines?

As the World Economic Forum’s annual meeting begins, conspiracy theorists escalate their claims

YES: World leaders at the WEF conference typically discuss major global problems and consider how to address them.

NO: The WEF did not ban vaccinated pilots from transporting industry leaders to the conference.

NO: The WEF did not establish a worldwide “15-minute city” zone that would prohibit people from traveling outside of this zone.

NO: The WEF did not publish a statement declaring that pedophilia would save the world.

NO: WEF founder and Executive Chairman Klaus Schwab did not detail a plan to launch a global cyberattack to bring vital services to a halt.

NewsLit takeaway: Each year, world leaders gather in Davos, Switzerland, for the WEF’s annual meeting — and each year, conspiracy theorists meet online to take their statements out of context, falsely interpret their videos and conjure up rumors out of whole cloth.

Many of the conspiratorial claims paint the nongovernmental organization as an all-powerful entity like the Illuminati or the New World Order, wielding power in secret and supposedly enacting global policies to fit its own agenda. But that isn’t the case. The WEF cannot make declarations that the rest of the world must follow. Furthermore, the forum often involves planning exercises that allow leaders to theorize and practice strategies they would implement in case of a catastrophe. This makes it particularly easy for conspiracy theorists and other bad actors to take these practice sessions out of context and misrepresent them online.

No, Rep. Ayanna Pressley (D) didn’t say ‘IQ is a measure of whiteness’

NO: Pressley did not post a tweet that says, “IQ is a measure of whiteness.”

YES: This is a fabricated tweet that never appeared in Pressley’s Twitter timeline.

NewsLit takeaway: Fake tweets often go viral when they reinforce the preconceived beliefs and convictions of a significant number of people. In January, commentary about the supposed teaching of critical race theory found its way back into the news cycle when the Florida Department of Education blocked a new Advanced Placement course on African American studies from being offered in the state’s high schools. This may have helped this old fake Pressley tweet — which originally appeared on the internet message board 4chan in June 2021 — to go viral again.

Confirmation bias can narrow perspectives and even foster extreme political beliefs based on exaggerated caricatures of perceived political opponents. Avoid falling for these outrage bait traps by recognizing these biases and taking care to base political opinions on verified information.

You can find this week’s rumor examples to use with students in these slides.

• A Maine newspaper faced backlash after running a heavily redacted version of Martin Luther King Jr.’s “I Have a Dream” speech on its editorial page. The editorial board published an apology and pledged to be “a voice for equality, freedom and justice.”

• Following a mass killing after a Lunar New Year celebration in Monterey Park, Calif., the Asian American Journalists Association released guidelines for journalists covering violence — including centering “community experiences and victims’ and survivors’ stories.”

• At least 40 journalists were targeted with threats and physical violence during or after the Jan. 8 riot in the Brazilian capital, according to the Committee to Protect Journalists.

• Twitter is failing to enforce its own policies against climate misinformation. Tweets containing climate-change-denying language saw a 300 percent increase last year, according to an Advance Democracy report.

• TikTok bans on some college campuses are being criticized by students and internet freedom advocates as censorship. Others note that the bans are ineffective, since students can still access the app using cellular data on their personal devices.

• TikTok will now label posts from state-controlled media outlets in 40 countries, including the U.S. The label was initially piloted last year after the Russia-Ukraine war began.

• ICYMI: The most-clicked link in the last issue of the Sift was this story about a psychic on TikTok who falsely accused a professor of the murders of four Idaho students.
