Artificial intelligence can help us to better understand the causes of religious violence and to potentially control it, according to a new Oxford University collaboration. The study is one of the first to be published that uses psychologically realistic AI – as opposed to machine learning.
The research, published in the Journal of Artificial Societies and Social Simulation, combines computer modelling and cognitive psychology to create an AI system able to mimic human religiosity. This approach allows for a better understanding of the conditions, triggers and patterns of religious violence.
The study is built around the question of whether people are naturally violent, or whether factors such as religion can cause xenophobic tension and anxiety between different groups that may or may not lead to violence.
The findings reveal that people are a peaceful species by nature. Even in times of crisis, such as natural disasters, people tend to bond and come together. However, in a wide range of contexts they are willing to endorse violence – particularly when others go against the core beliefs which define their identity.
Conducted by a cohort of researchers from universities including Oxford, Boston University and the University of Agder, Norway, the paper does not explicitly simulate violence, but instead focuses on the conditions that enabled two specific periods of xenophobic social anxiety that then escalated to extreme physical violence.
Justin Lane, a DPhil student in the Institute of Cognitive & Evolutionary Anthropology and a co-author of the work who led the design of the model and the data collection, said: ‘Religious violence is not our default behaviour – in fact it is pretty rare in our history.’
Although the research focuses on specific historic events, the findings can be applied to any occurrence of religious violence and used to understand the motivations behind it – particularly events of radicalised Islam, where people’s patriotic identity conflicts with their religious one, e.g. the Boston bombing and the London terror attacks. The team hope that the results can be used to help governments address and prevent social conflict and terrorism.
The paper focuses on two cases of extreme violence. The first is the conflict commonly referred to as the Northern Ireland Troubles, which is regarded as one of the most violent periods in Irish history. The conflict, involving the British Army and various Republican and Loyalist paramilitary groups, spanned three decades, claimed the lives of approximately 3,500 people and saw a further 47,000 injured.
Although a much shorter period of tension, the 2002 Gujarat riots in India were equally devastating. The three-day period of inter-communal violence between the Hindu and Muslim communities in the western Indian state of Gujarat began when a Sabarmati Express train filled with Hindu pilgrims stopped in the predominantly Muslim town of Godhra, and ended with the deaths of more than 2,000 people.
Of the study’s use of psychologically realistic AI, Justin said: ‘99% of the general public are most familiar with AI that uses machine learning to automate human tasks – classifying something such as tweets as positive or negative, etc. – but our study uses something called multi-agent AI to create a psychologically realistic model of a human: for example, how do they think, and particularly how do we identify with groups? Why would someone identify as Christian, Jewish or Muslim? Essentially, how do our personal beliefs align with how a group defines itself?’
To create these psychologically realistic AI agents, the team used theories from cognitive psychology to mimic how a human being would naturally think and process information. This is not a new or radical approach, but it is the first time it has been applied physically in research. There is an entire body of theoretical literature that compares the human mind to a computer programme, but no one had taken this information and actually programmed it into a computer; it had remained an analogy. The team programmed these rules for cognitive interaction into their AI programme, to show how an individual’s beliefs match up with a group situation.
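A minimal sketch of what such a rule-based agent might look like is below. The class name, the belief representation and the similarity scoring are illustrative assumptions for this article, not the authors’ published model: each agent holds a set of weighted core beliefs and identifies with whichever group’s defining beliefs most closely match its own.

```python
class Agent:
    """Toy stand-in for a 'psychologically realistic' agent (illustrative only)."""

    def __init__(self, beliefs):
        self.beliefs = beliefs   # e.g. {"ritual": 0.9, "doctrine": 0.7}
        self.anxiety = 0.0       # rises when core beliefs are challenged

    def alignment(self, group_beliefs):
        """Mean similarity between personal and group belief strengths."""
        shared = set(self.beliefs) & set(group_beliefs)
        if not shared:
            return 0.0
        return sum(1 - abs(self.beliefs[k] - group_beliefs[k])
                   for k in shared) / len(shared)

    def identify_group(self, groups):
        """Join the group whose defining beliefs best match one's own."""
        return max(groups, key=lambda name: self.alignment(groups[name]))
```

For example, an agent with strong ritual and doctrinal commitments would identify with a group whose defining beliefs are close to those values rather than with a distant one.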
They did this by looking at how humans process information against their own personal experiences, combining some AI agents (mimicking people) that had had positive experiences with people of other faiths with others that had had negative or neutral encounters. This allowed them to study the escalation and de-escalation of violence over time, and how it can – or cannot – be managed.
To represent everyday society and how people of different faiths interact in the real world, they created a simulated environment and populated it with hundreds, thousands or even millions of these human model agents. The only difference is that these ‘people’ all have slightly different variables – age, ethnicity, etc.
The simulated environments themselves have a basic design. Individuals have a space that they exist in, but within this space there is a certain probability that they will interact with environmental hazards, such as natural disasters and disease, and, at some point, with each other.
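One tick of such an environment can be sketched as follows. The hazard and encounter probabilities, and the event format, are assumptions chosen for illustration, not parameters reported in the paper: each step, every agent may hit an environmental hazard and may meet one randomly chosen other agent.

```python
import random

def step(agents, hazard_prob=0.05, encounter_prob=0.10, seed=None):
    """One tick of a toy simulated environment (illustrative parameters)."""
    rng = random.Random(seed)
    events = []
    for i, _agent in enumerate(agents):
        # Environmental hazards: natural disasters, disease, etc.
        if rng.random() < hazard_prob:
            events.append((i, "hazard"))
        # Occasional encounters with a randomly chosen other agent.
        if rng.random() < encounter_prob and len(agents) > 1:
            other = rng.randrange(len(agents) - 1)
            other += other >= i   # shift past i so an agent never meets itself
            events.append((i, ("meets", other)))
    return events
```

Running the loop over many ticks, and feeding each agent’s hazards and encounters back into its internal state, is what lets the model trace how tensions build or dissipate over time.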
The findings revealed that the most common conditions enabling long periods of mutually escalating xenophobic tension occur when social hazards – such as outgroup members who deny the group’s core beliefs or sacred values – overwhelm people to the point that they can no longer cope with them. It is only when people’s core belief systems are challenged, or they feel that their commitment to their own beliefs is questioned, that anxiety and agitation occur. However, this anxiety led to violence in only 20% of the scenarios created, all of which were triggered by people, either from outside the group or within it, going against the group’s core beliefs and identity.
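The escalation logic described above – anxiety accumulating from social hazards, with violence endorsed only when core beliefs are attacked and coping capacity is exceeded – can be sketched as a simple update rule. The decay rate, coping threshold and hazard fields here are assumptions for illustration, not quantities from the study.

```python
def update_anxiety(anxiety, hazards, coping=3.0, decay=0.9):
    """Toy escalation rule (illustrative assumption, not the published model).

    Anxiety decays each step but rises with every social hazard; an agent
    endorses violence only when some hazard strikes at its core beliefs
    AND accumulated anxiety has overwhelmed its coping capacity.
    """
    anxiety = anxiety * decay + sum(h["severity"] for h in hazards)
    violent = (anxiety > coping
               and any(h["targets_core_beliefs"] for h in hazards))
    return anxiety, violent
```

Under a rule like this, most runs stay peaceful: violence requires both a sustained build-up of anxiety and a direct challenge to identity-defining beliefs, which mirrors the paper’s finding that only a minority of scenarios escalated.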
Some religions have a tendency to encourage extreme displays of devotion to a chosen faith, and this can then take the form of violence against a group or individual of another faith, or against someone who has broken away from the group.
While other researchers have tried to use traditional AI and machine learning approaches to understand religious violence, these have delivered mixed results, and issues regarding biases against minority communities in machine learning also raise ethical concerns. The paper marks the first time that multi-agent AI has been used to tackle this question and create psychologically realistic computer models.
Justin said: ‘Ultimately, to use AI to study religion or culture, we have to look at modelling human psychology because our psychology is the foundation for religion and culture, so the root causes of things like religious violence rest in how our minds process the information that our world presents it.’
Understanding the root causes of religious violence means the model could be used to contain and minimise these conflicts – or, in the wrong hands, to inflame them. Used effectively, however, this research can be a positive tool that supports stable societies and community integration.
Off the back of this research, the team have recently secured funding for a new two-year project with the Center for Modeling Social Systems in Kristiansand, Norway. The work will help the Norwegian government to optimise the refugee integration process by studying demographic shifts related to immigration and integration in Europe, such as the Roma in Slovakia and the resettlement of Syrian refugees from Lesbos to Norway.