Exploring the Social Imagination

Monday, April 24, 2023

Labeling in the Social Imagination: Who Benefits... Revisited

We tend to think we are original, even to the point of stating publicly that we are radically different from our parents; this is especially true when we are teens. At that tender age of identity searching, we look for labels to describe ourselves, our generation, our group.

For example, young adults tend to look around at 'modern' contemporary society (largely their school, social media, and the entertainment industry, including sports), in which they think they will either find their dream life or fully achieve their 'wanna-be' self. So, they locate their peers in such settings and grab onto the labels those peers use, or are told/guided to use, as if those labels told the truth about who they really are in the place where they are.

However, by doing that, they fail to realize that they (in the place they are from) are really not the same teen or young adult they imagine living in California, Chicago, or New York. Some live in Houston, Miami, Pittsburgh, or Milwaukee, and every place in between. Yet somehow teens and young adults see other teens as the same no matter where they are. This is both a problem for them and perhaps an inspiration: an inspiration if it moves them to live within their own scope of abilities and means, and a problem if they get labeled for doing so. And that is the problem.

Labels stick... and the sticky complexity of labels goes far back in history. In the first half of the twentieth century, Alfred Schütz began exploring this complexity in what he called the "lifeworld," or the paramount reality: the one you live in as a conscious member.

Schütz observed that all societies attempt to typify, or label, as in categorize, the people and things within their geographical boundaries, as well as those who appear 'visibly' outside those boundaries. Why? In order to better understand them within the context of their own society.

This observation is correct. Why? Because society, by necessity, needs to reduce and categorize the rapid and vast flow of incoming information, both known and unknown. It's called 'processing' in computer jargon. This process makes life easier for everyone in the group.

Alfred Schütz also thought that in the paramount reality, where the processing of information happens, there exist pre-formed packets of data... these can be pictured as basic packages or default modes of information that were already experienced, labeled, and saved for better, faster future processing and innovation.
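Since the paragraph reaches for computer jargon here, a toy Python sketch may make the analogy concrete: a cache that hands back a saved label instead of judging afresh. The function and label names below are hypothetical illustrations of the analogy, not anything from Schütz:

```python
# A toy illustration of the "pre-formed packets" idea: once an experience
# has been judged and labeled, the label is saved (cached) and reused, so
# future encounters are processed faster, and less freshly.
label_cache = {}  # thing already experienced -> inherited/saved label

def typify(thing, first_hand_judgement):
    """Return a label for `thing`, reusing a cached label when one exists."""
    if thing in label_cache:
        return label_cache[thing]  # fast path: the inherited packet
    label = first_hand_judgement(thing)  # slow path: genuine first encounter
    label_cache[thing] = label  # save for better, faster future processing
    return label

# The first call judges first-hand; every later call reuses the stored
# label, even if a fresh judgement would have come out differently.
print(typify("stranger", lambda t: "potential friend"))  # potential friend
print(typify("stranger", lambda t: "possible threat"))   # still: potential friend
```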

Essentially, Schütz was concerned with the "dialectical relationship between the way people construct social reality and the given social and cultural reality that they inherit from those who preceded them in the social world", i.e. parents/teachers/community. And he was right to consider that... not as 'oh, that must be wrong or outdated,' but as recognizing that such an inheritance of information packets has meaning, and that such meaning should be taken seriously as worthwhile.

Why? Because, obviously, why re-invent the wheel? Even animals learn from inherited experience, 'information' from their parents and their community. They learn who/what something is 'to them' and who/what something is not. It's not about being right or wrong; it's about what works for a group of people in a given place over time. And indeed, labeling is an essential part of that processing of who/what something is and is not.

Now, in sociology and psychology, few have considered the significant impact of the ruling elite, who truly have the power and position to control people, especially those they need to use: recruits, employees, educators, and, let's not forget, politicians. The elites create groups, label them, and build an agenda around the label. In this way, they take control of the group and its people. How? Because by labeling them, they take control away from those who get labeled. They denude them.

This happens to women, to men, to Christians, to Muslims, to immigrants and refugees, to white conservatives, to freedom lovers, to the poor, to the disabled and handicapped, to the single mom, to seniors, to children, to babies and fetuses, and even to LGBT people. They all get a label and are controlled by that label. And when it is deemed necessary, the controlling elite put a new twist on the label, either as an upgrade or a downgrade. Sadly, too many people actually believe their label is for their own good! Come on, people, wake up!

Who benefits? The ruling elite benefit: they get to keep their identity and their position, their property, their licenses, their rules, their policies, their local ordinances, their zoning, their jurisdiction, and their wealth. It's ironic that they too are labeled, but they make sure the labels they get are to their advantage. Try as we might to run from the past and its labels, make no mistake: more will only be heaped on us as we exist in the social imagination.

The ruling elite are preparing their final label to use on everyone: to put everybody into the same box, the same group, with the same label. And it will be a mark/label like no other. By accepting the ruling elite's labels instead of our family's or community's, we risk losing our identity in the place where we were born, are living, and will die. It is wiser to strive to live better within the boundaries set by your parents, family, community, or pastor in the place where you are or were born than to be labeled and bound by strangers.

*It's better yet to live labeled as a child of God than as a child of the beast system.

Sunday, April 16, 2023

The Problem with AI-Generated Images and Content in the Social Imagination...

Will AI-generated images create a new crisis for fact-checkers? Experts are not so sure...

Eliot Higgins, Marilín Gonzalo, Felix Simon and Valentina de Marval discuss the challenges posed by software such as Midjourney and DALL-E. By Gretel Kahn, April 11, 2023.

Over the past few weeks, a number of improbable images went viral: former US President Donald Trump getting arrested; Pope Francis wearing a stylish white puffer coat; Elon Musk walking hand in hand with General Motors CEO Mary Barra. 

These pictures are not that improbable though: President Trump was indeed getting arrested; Popes are known to wear ostentatious outfits; and Elon Musk has been one half of an unconventional pairing before. What is peculiar though is that they are all fake images created by generative artificial intelligence software. 

AI image generators like DALL-E and Midjourney are popular and easy to use. Anyone can create new images through text prompts. Both applications are getting a lot of attention. DALL-E claims more than 3 million users. Midjourney has not published numbers, but they recently halted free trials citing a massive influx of new users. 
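To give a sense of how low the barrier to entry is, here is a minimal sketch of generating an image from a text prompt with OpenAI's Python client as it existed in early 2023; the API key placeholder and prompt are illustrative, and the interface has since been revised:

```python
# Minimal text-to-image sketch using OpenAI's pre-1.0 Python SDK
# (the DALL-E endpoint as offered in early 2023). Illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: use your own key

response = openai.Image.create(
    prompt="the pope in a stylish white puffer coat, photorealistic",
    n=1,               # number of images to generate
    size="512x512",    # DALL-E 2 sizes: 256x256, 512x512, 1024x1024
)

# The API responds with a temporary URL to the generated image.
print(response["data"][0]["url"])
```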

While the most popular uses of generative AI so far are for satire and entertainment purposes, the sophistication of the technology is growing fast. A number of prominent researchers, technologists and public figures have signed an open letter asking for a moratorium of at least six months on the training and research of AI systems more powerful than GPT-4, a large language model created by US company OpenAI. “Should we let machines flood our information channels with propaganda and untruth?” they ask.

I spoke to several journalists, experts, and fact-checkers to assess the dangers posed by visual generative AI. When seeing is no longer believing, what implications does this technology have for misinformation? How will it affect the journalists and fact-checkers who debunk hoaxes? Will our information channels be flooded with “propaganda and untruth”?

On 20 March, journalist Eliot Higgins, founder of Bellingcat, tweeted a series of images he made using Midjourney. The pictures depicted a narrative around the arrest of former US President Donald Trump: from fictional arrest to fictional escape from prison. The pictures quickly went viral, and Higgins was subsequently locked out of the AI image generator's server.

“The thread I posted proves how quickly images that appeal to individuals' interests and biases can become viral,” Higgins says. “Fact-checking is something that takes a lot more time than a retweet.”

For those who work to debunk disinformation, the rise of AI-generated images is indeed a growing concern, since a big proportion of the fact-checking they do is image- or video-based. Marilín Gonzalo writes a technology column at Newtral, an independent Spanish fact-checking organization. She says visual disinformation is a particular concern since images are especially compelling and can have a strong emotive impact on audiences' perceptions.

“You can talk to a person for an hour and give him 20 arguments for one thing, but if you show him an image that makes sense to him, it is going to be very difficult to convince him that’s not true,” Gonzalo says. 

Chilean journalist Valentina de Marval, a professor of journalism at the Universidad Diego Portales with previous fact-checking experience at agencies like AFP, Chicas Poderosas and LaBot Chequea, is also worried about the rise of AI-generated images. While these images contain clues that show they are fake, like hands, teeth or ears, De Marval is concerned that the rapid improvement of these models will render such indicators obsolete.

“Maybe in a couple of months or days artificial intelligence will have learned, for example, to draw hands well, to outline the eyes well, to put teeth or ears, to make the skin less smooth and make it more real with imperfections,” she says.

Despite concerns that AI-generated imagery might lead to a truth crisis, experts like Felix Simon, a communication researcher and PhD student at the Oxford Internet Institute, warn against taking an alarmist view of these new technologies, saying that their proliferation does not necessarily equate to more people believing in those images.

“The relationship between image and truth has always been unstable,” says Simon. “One could say that what we see with generative AI is just a continuation of that. Many people will get used to it. They will develop defense mechanisms both on a personal level but also on institutional level, where news organizations will probably go to greater lengths to check if images show what they claim to show.”

Simon says that concerns about a new image-based information warfare and the proliferation of fake news date back to the days when photography was introduced to newsrooms. More recently, concerns about the impact of deep fakes have been around for years.

Going back a few more years, similar concerns regarding image-based fake news emerged when Photoshop became accessible to the public. And just a few days ago, a suggestive Playboy magazine cover of French government minister Marlène Schiappa went viral. The image was quickly proven to be fake: a photomontage of the politician's face and the body of another woman.

Bellingcat’s Higgins believes that AI-generated images are a phenomenon that will most likely be contained to social media platforms rather than something that reaches anywhere near the mainstream media. He also thinks that fake images will be debunked as they go viral.

“The kind of people who are trying for a certain degree of mainstream legitimacy aren't going to let themselves be called out constantly by sharing fake images,” he says. “I really think it is going to be something that is more about kind of gut reactions and memes, rather than anyone serious campaigning around fake images.”

However, what concerns fact-checkers is not necessarily what this software produces, but the speed at which it is produced. News organizations will not only have to properly verify information but do so in a timely manner to avoid an information vacuum.

Unlike Photoshop or deepfake software, DALL-E and Midjourney are able to generate media within seconds from just a few text prompts. Gonzalo calls this phenomenon ‘a digital fire’: the rapid distribution of a fake image or video through social media platforms. “This is a constant concern for fact-checkers because we can't see what is moving at the level of WhatsApp groups or other messaging groups, and this runs very fast because it is a viral type of distribution,” she says.

De Marval thinks fact-checkers will have to adapt their methodology and rhythms to catch up with the potential influx of synthetic images. “Verification methods have to be adapted and streamlined in all newsrooms so they can process videos and images before showing them,” she says.

De Marval says the issue of disinformation goes beyond emerging tech and is related to the erosion of institutional trust. “We are never going to have enough journalists,” says De Marval. “There is a loss of prestige in the profession of journalism and a loss of prestige of institutions and politics in general. The more the media and state institutions are discredited, the more disinformation will circulate.”

While generative AI certainly contributes to an increase in the scale of production of mis- and disinformation, Simon thinks that claims that this technology might lead to the end of truth are problematic. “It is not necessarily that people will be more easily fooled, but rather that people will become slightly more skeptical of information in general, including trustworthy information,” he says. 

This has problematic implications for a media environment where trust in news is already eroded. Our own Digital News Report 2022 showed that trust in news is on the decline with only 42% of people from our global sample saying they trust most news most of the time.

The most recent report from our Trust in News Project found that trust in news on social media, search engines, and messaging apps is consistently lower than audience trust in information in the news media more generally. The study also details how a large proportion of people believe that false and misleading information and platforms using data irresponsibly are ‘big problems’ for many of these platforms in their countries.

“[What we've seen recently] has led to a much broader awareness of what you can do with these generation systems,” says Higgins. “While that leads to people being a bit more cynical about what they're seeing, it might go too far the other way, where people just refuse to believe any image.”

This raises the question of what responsibilities these AI startups have in setting their content apart from real images and videos. Those I spoke to advocate for more transparency from these companies to make it easier for users to tell whether an image was generated through AI, for instance by introducing watermarks.

Some news organizations have also been working to develop tools to let audiences know that their content is real. For example, Project Origin is a collaborative project between media organizations like the BBC, CBC/Radio-Canada, the New York Times and tech organizations like Microsoft that is developing signals, like cryptographic verification marks, that would be tied to media content to prove the authenticity and source of a given piece of content, like an image or a video. 
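The general idea behind such cryptographic verification marks can be sketched in a few lines: the publisher signs the media file's bytes with a private key, and anyone holding the matching public key can check that the file is unaltered and really came from that publisher. This is only a minimal sketch of the concept using the Python `cryptography` package, not Project Origin's actual scheme; the placeholder bytes stand in for a real image file:

```python
# Minimal sketch of a cryptographic provenance mark: sign the media bytes,
# then verify them. Real schemes (Project Origin, C2PA) embed richer signed
# metadata in the file itself; this shows only the core sign/verify step.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair and sign the image bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"\xff\xd8...raw JPEG bytes..."  # stand-in for a real file
signature = private_key.sign(image_bytes)      # the "verification mark"

# Consumer side: verify the mark against the publisher's public key.
try:
    public_key.verify(signature, image_bytes)
    print("Authentic: bytes match the publisher's signature.")
except InvalidSignature:
    print("Tampered with, or not from this publisher.")
```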

Adobe’s recently introduced image-generating tool Firefly will include ‘content credentials’ in each image, a label that tells users whether an image was created by AI, according to the company’s Chief Trust Officer Dana Rao. Rao cited the fight against misinformation, and the need to sort what is real from what is fake going forward, as one of the reasons the company is introducing this feature.

The sources in this piece are also concerned about other ethical questions, particularly about the data these models are being trained on. All the viral examples I’ve mentioned portray real people. Midjourney has already limited which public figures it allows users to generate images of; it does not generate images of China’s president, Xi Jinping. However, this was done not out of privacy concerns but to “minimise drama,” according to the company’s founder and CEO David Holz, who wrote this in a post on the chat service Discord, as reported by the Washington Post.

 “What they're doing is clearly training on real people,” says Higgins. “There is the ethical consideration of, do they have the right to train these things on real people who haven't given their consent?”

A number of these AI generators, such as DALL-E, are trained on millions of public text-image pairs from the internet. “Donald Trump is a person who also has his personal data rights,” says Gonzalo. “Now people say ‘Well, but if you put your data on the internet…’ No! I can put my data on the internet, and that does not mean that I have to give up my right to data protection.”

Experts say we can diminish the impact of AI-based misinformation by fostering media literacy and educating citizens in personal fact-checking techniques (Really? The problem with that suggestion is that so-called educated people are and have been rewriting history and/or tearing it down. Facts, as in real numbers/percentages, are already manipulated and thus fake).

“It's not a runaway situation where this technology arises, then everything's going to change overnight and there's no way we can stop that in any way,” says Simon. “There's always ways to sort of hem that in and rein it in.”

Journalists and fact-checkers are already working on increasing the media literacy of their audiences so that they don’t fall for misinformation (Really? They are the biggest part of the problem because they are already compromised as to the truth). “What we're trying to do at Bellingcat is take a more education-driven approach, where we're working with schools and universities to train students and teach them about these skills, ideas, and concepts,” says Higgins. These workshops aim to increase media literacy among students and teachers, with techniques ranging from fact-checking and verification methods to informing them about what is now possible with fake images.

De Marval, who teaches a fact-checking course to her university students, says the most important thing is to look at the context around an image and question who is distributing this ‘news’: the more politically incendiary an image is, the more hesitant we should be about its veracity. “No matter how much fact-checking we do, or if all newsrooms are verifying all content, it will be of little use if people are not educated,” she says.

COMMENTARY ~ A picture paints a thousand words... especially in the social imagination. There is an old anthropology joke about how language was invented, and I have told it before. I won't retell it now, but the punch line is: "It ain't what it looks like."

“The time is coming when you will long to see one of the days of the Son of Man, but you will not see it. People will tell you, ‘Look, there He is!’ or ‘Look, here He is!’ Do not go out or chase after them.” ~ LUKE 17:22-23

ONLINE SOURCE ~  https://reutersinstitute.politics.ox.ac.uk/news/will-ai-generated-images-create-new-crisis-fact-checkers-experts-are-not-so-sure