
Fighting terrorists on social media: from artificial intelligence and data analytics to cross-platform and transnational cooperation

Terrorist groups have increasingly used social media platforms such as YouTube, Facebook and Twitter to further their goals and spread their message, drawn by the convenience, affordability and broad reach of these platforms.

 

A study by Gabriel Weimann of the University of Haifa found that terror groups use social media platforms such as Twitter, Facebook and YouTube, as well as internet forums, to spread their messages, recruit members and gather intelligence. IS propaganda has included articles giving instructions on how to carry out attacks, such as the “Just Terror” section in Rumiyah, IS’s English-language publication, and instructional videos such as “How to slaughter the disbelievers”.

 

Social media platforms have a huge global reach and audience: YouTube boasts more than 1 billion users each month, who watch some 6 billion hours of video monthly, with 100 hours of video uploaded every minute (YouTube Statistics, 2014). Similarly, Twitter carries on average 350,000 tweets per minute, or 500 million tweets per day (Twitter, 2014), while Facebook remains the largest social media network, with 500 million active users and 55 million people posting updates (Fiegerman, 2014).

 

Al-Qaeda has an internet presence spanning nearly two decades. Al-Qaeda terrorists use the internet to distribute material anonymously or to ‘meet in dark spaces’. The Czech Military Intelligence Service has commented that Al-Qaeda is spreading its ideology among Muslim communities in Europe, mainly through social media.

 

Taliban militants fighting the Afghan government and NATO-led forces to regain power in militancy-plagued Afghanistan have made extensive use of the internet in the propaganda war. Today, the Taliban runs several websites that publish its military and political activities for readers across the globe. The group has been active on Twitter since May 2011 and has many thousands of followers.

 

ISIS has proved fluent in YouTube, Twitter, Instagram, Tumblr, internet memes and other social media. Its posting activity ramped up during its offensive in northern Iraq, reaching an all-time high of almost 40,000 tweets in one day as its fighters marched into the city of Mosul.

 

Challenges of fighting terrorists’ online propaganda

Research on terrorist and violent extremist trends shows that a single online terrorist campaign often uses three or more platforms. A smaller, less-regulated platform is usually used for private coordination—such as an end-to-end encrypted chat platform. A second platform is used for storing original copies of propaganda and media; think cloud storage or similar file-sharing sites. Additionally, core members or sympathizers disseminate strategized content on larger social media platforms, or “amplification” outlets, which will inevitably be the well-known platforms that everyone uses to gain the most traction. Fighting terrorism online requires addressing this interplay, but any single platform or company lacks visibility into the trends elsewhere online.

 

Without strong on-platform signals, such as text or images shared alongside a link, platforms do not inherently know that a URL shared on their service leads to violating content hosted on a third-party site. They also often cannot tell that a user is a “terrorist” or “violent extremist” without obvious signals on their platforms. Research examining the outlinks associated with one Islamic State publication showed that URLs shared on Twitter alone linked to 244 different content-hosting platforms, largely lesser-known micro-sites such as 4shared.com, cloudup.com and cloud.mail.ru.
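The scale of that outlink problem can be made concrete with a few lines of analysis. The sketch below is for illustration only: the URLs and variable names are invented rather than taken from the study, but counting how many shared links resolve to each external hosting domain is essentially how researchers arrived at figures like the 244 platforms mentioned above.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample of URLs harvested from public posts (illustrative only).
outlinks = [
    "https://www.4shared.com/video/abc123/file.html",
    "https://cloudup.com/xyz789",
    "https://cloud.mail.ru/public/some/file",
    "https://www.4shared.com/file/def456/other.html",
]

# Count how many shared links point at each external content-hosting domain.
hosting_domains = Counter(
    urlparse(url).netloc.removeprefix("www.")  # removeprefix requires Python 3.9+
    for url in outlinks
)

for domain, count in hosting_domains.most_common():
    print(f"{domain}: {count} link(s)")
```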

 

Twitter noted that “there is no ‘magic algorithm’ for identifying terrorist content on the Internet, so global online platforms are forced to make challenging judgment calls based on very limited information and guidance.” Moreover, the same users can quickly open new accounts. The company added: “As an open platform for expression, we have always sought to strike a balance between the enforcement of our own Twitter Rules covering prohibited behaviors, the legitimate needs of law enforcement, and the ability of users to share their views freely – including views that some people may disagree with or find offensive.”

 

The indicators of violent extremism in the United States can’t be expected to look identical to indicators across Europe, Asia and Africa. Every region and country has its own violent extremist and terrorist organizations, each with specific sociopolitical histories that often include coded symbols, slogans and slurs. Yet not every tech company has the capacity to hire tens of thousands of moderators around the world. Only the largest monetized companies can afford this internal support infrastructure. Cross-sector efforts and public-private partnerships will remain key, particularly for the smaller platforms relying on third-party intel and tooling.

 

Johnson says that social networks have to balance the need to remove such content with concerns around free speech. She says: “Social communities are insistent that there has to be a clear sense of when free speech becomes hate speech, to ensure content can be removed without impinging on human rights.” The key, Johnson believes, is communication between governments, technology companies and social media users. She says: “Collaboration and communication will get us to a solution sooner.”

 

However, as major tech companies get better at ridding their platforms of gory videos and calls to commit violence, terrorists are finding new ways to post their messages, said Kirstjen Nielsen, secretary of the US Department of Homeland Security. “They’ve continued to demonstrate their will,” Nielsen said, noting that blogs, chat rooms and encrypted chat apps can serve as ways for terrorist groups to radicalize and recruit new members.

 

The leaders of Islamic State (IS) and Al Qaeda and their subsidiaries survive because they are quick to adapt to changes in the physical and virtual battlespace, writes Philip Seib in Fortune. “For some of their online communication, this has meant moving from the easily accessible ‘surface web’ to the ‘deep web’ and then on to its deepest part, the ‘dark web.’ This is where one can find drugs, pornography, weapons, and other contraband. The dark web is out of reach of the most common search engines, such as Google, and is difficult for hackers to penetrate. IS warehouses its propaganda videos and other material on dark sites, and has raised and transferred money using the dark web’s currency of choice, Bitcoin.”

 

Countering terrorist propaganda

The G20 leaders underlined that appropriate filtering, detecting and removal of content that incites terrorist acts is crucial. “… we also encourage collaboration with industry to provide lawful and non-arbitrary access to available information where access is necessary for the protection of national security against terrorist threats,” they said.

 

The ability of tech companies to share risk mitigation tools across platforms, as well as to work with governments and civil society to share trends and advance compatible crisis response frameworks, has come a long way in recent years. These endeavors have been fostered through initiatives like the Global Internet Forum to Counter Terrorism (GIFCT), where I work; Tech Against Terrorism (TAT); and the Global Network on Extremism and Technology (GNET). These organizations and networks work collaboratively with wider government-led forums—such as the EU Internet Forum, the United Nations’ Counter-Terrorism Executive Directorate and the Christchurch Call to Action—in an effort to advance tech companies’ efforts to self-regulate and increase proactive responses.

 

The Global Internet Forum to Counter Terrorism: fighting terrorism online

In June 2017, Facebook, Microsoft, Twitter and YouTube announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT), which they said would “help us continue to make our hosted consumer services hostile to terrorists and violent extremists.”

 

The consortium pools technology, research and counterterrorism tactics, including “counter-speech,” which tries to prevent terrorist recruitment and incitement. “The forum we have established allows us to learn from and contribute to one another’s counter-speech efforts, and discuss how to further empower and train civil society organisations and individuals who may be engaged in similar work, and support ongoing efforts such as the Civil Society Empowerment Project (CSEP),” the companies said.

 

“The spread of terrorism and violent extremism is a pressing global problem and a critical challenge for us all. We take these issues very seriously, and each of our companies has developed policies and removal practices that enable us to take a hard line against terrorist or violent extremist content on our hosted consumer services,” the companies said. “Our mission is to substantially disrupt terrorists’ ability to use the internet in furthering their causes, while also respecting human rights. This disruption includes addressing the promotion of terrorism, dissemination of propaganda, and the exploitation of real-world terrorist events through online platforms.”

To achieve this, the companies said they would join forces around three strategies:

  • Employing and leveraging technology
  • Sharing knowledge, information and best practices, and
  • Conducting and funding research.

 

While Facebook said it took action on 1.9 million pieces of Islamic State (IS) and Al-Qaeda content in the first quarter of 2018, Twitter said on 5 April that it had suspended over 1.2 million accounts for terrorist content since August 2015. On 23 April, Google-owned YouTube said it had removed over 8 million videos during October-December 2017, of which 6.7 million were first flagged for review by machines rather than humans.

 

Facebook has a counterterrorism team of 200 people, up from 150 in 2017. According to a 23 April note by Monika Bickert, vice-president of global policy management, and Brian Fishman, global head of counterterrorism policy, “the challenge of terrorism online isn’t new (but) has grown increasingly urgent as digital platforms become central to our lives”. About 99% of the IS and Al-Qaeda-related terror content the company removes is content it detects “before anyone in our community has flagged it to us, and in some cases, before it goes live on the site”.

 

Facebook does this primarily through the use of automated systems like photo and video matching and text-based machine learning. Once the company is aware of a piece of terror content, it also removes “83% of subsequently uploaded copies within one hour of upload”.
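Facebook has not published its matching code, but the general idea behind photo and video matching can be illustrated with an open-source perceptual hash. The sketch below uses the third-party Python libraries Pillow and ImageHash as a stand-in for whatever Facebook runs internally: a newly uploaded image is hashed and compared against hashes of previously removed propaganda, and a small Hamming distance flags a likely re-upload even if the file was re-encoded or slightly altered. The file names and threshold are hypothetical.

```python
from PIL import Image    # pip install Pillow
import imagehash         # pip install ImageHash

# Perceptual hashes of images previously removed for violating policy (placeholder files).
known_banned_hashes = [
    imagehash.phash(Image.open(path)) for path in ["banned_1.jpg", "banned_2.jpg"]
]

MAX_HAMMING_DISTANCE = 8  # illustrative tolerance for re-encoding, resizing, small edits

def is_likely_reupload(path: str) -> bool:
    """Return True if the uploaded image is a near-duplicate of known banned media."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - banned <= MAX_HAMMING_DISTANCE for banned in known_banned_hashes)

print(is_likely_reupload("new_upload.jpg"))
```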

 

Similarly, in its 12th biannual Twitter Transparency Report, released on 5 April, the micro-blogging site said that between 1 August 2015 and 31 December 2017 it had suspended a total of 1,210,357 accounts for violations related to the promotion of terrorism. During the reporting period of 1 July to 31 December 2017, a total of 274,460 accounts were permanently suspended for such violations, of which 93% were flagged by internal, proprietary tools and 74% were suspended before their first tweet.

 

For its part, Google introduced machine-learning flagging in June 2017. “Now more than half of the videos (on YouTube) we remove for violent extremism have fewer than 10 views,” it said on 23 April. GIFCT maintains a shared industry database of “hashes” (unique digital fingerprints of terrorist media).

 

Facebook’s Monika Bickert and Brian Fishman say that the human element still needs to play a key role in identifying terrorist content. “We still rely on specialised reviewers to evaluate most posts, and only immediately remove posts when the tool’s confidence level is high enough that its ‘decision’ indicates it will be more accurate than our human reviewers. We can reduce the presence of terrorism on mainstream social platforms, but eliminating it completely requires addressing the people and organisations that generate this material in the real world.”
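Bickert and Fishman do not publish thresholds, but the review flow they describe (auto-remove only at high confidence, otherwise route to human reviewers) can be sketched as a simple triage rule. Everything below, including the threshold values and function names, is an invented illustration of that pattern, not Facebook’s implementation.

```python
AUTO_REMOVE_THRESHOLD = 0.98  # hypothetical: act automatically only when very confident
REVIEW_THRESHOLD = 0.60       # hypothetical: anything above this goes to a human queue

def triage(post_id: str, model_confidence: float) -> str:
    """Route a post based on a classifier's confidence that it is terrorist content."""
    if model_confidence >= AUTO_REMOVE_THRESHOLD:
        return f"remove:{post_id}"        # high confidence: remove immediately
    if model_confidence >= REVIEW_THRESHOLD:
        return f"human_review:{post_id}"  # uncertain: specialised reviewers decide
    return f"no_action:{post_id}"         # low confidence: leave the post up

print(triage("post-123", 0.99))  # remove:post-123
print(triage("post-456", 0.75))  # human_review:post-456
```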

 

The coalition said the companies would share both technology and operational practices: it would refine and improve existing joint technical work, such as the Shared Industry Hash Database, and define standard transparency reporting methods for terrorist content removal. It will also work directly with governments, civil society groups, academics and other companies to share information about the latest terrorist activities.

 

The Global Internet Forum to Counter Terrorism (GIFCT) announced in Dec 2021 that Zoom had joined the group. The forum was founded by Facebook, Microsoft, Twitter and YouTube in 2017 and now has 18 members. Other members include WhatsApp, Pinterest, Dropbox, Discord and Amazon. Non-members like Reddit and Snap Inc. are also able to access the organization’s database. News of Zoom’s inclusion in the group was first reported by Reuters.

 

Zoom, which was started in 2011, skyrocketed in popularity soon after the COVID-19 pandemic began in 2020 as more people gathered online for work, school and to socialize. One common form of harassment on Zoom is “zoom-bombing,” when an uninvited guest logs into and interrupts a meeting, often blasting abusive, racist and misogynistic slurs. Meeting hijackers may also display disturbing or pornographic images. In Dec 2021, authorities in Washington state announced they were investigating a recent incident in which a school board meeting was interrupted by a group or individual who showed images of George Floyd and repeated racial slurs.

 

To date, GIFCT and others have coordinated cross-platform efforts for member companies. This includes a hash-sharing database of “digital fingerprints” of photos and videos that have been identified as terrorist content, so that platforms can track and remove the material if necessary. Learning from progress made in the child-safety space, hashed versions of labeled terrorist content allow identifiers of that content to be shared without sharing any user data or personally identifiable information.
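A simple way to see why hash sharing avoids exchanging user data is that only a fixed-length digest of the media file ever leaves a platform. The sketch below uses an exact cryptographic hash (SHA-256) for clarity; in practice industry hash-sharing typically relies on perceptual hashes that tolerate small edits. The file names and database contents here are hypothetical.

```python
import hashlib

def media_fingerprint(path: str) -> str:
    """Hash the raw bytes of a media file; the digest reveals nothing about who posted it."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# A shared industry database can then simply be a set of digests contributed by members.
shared_hash_database = {
    media_fingerprint("labeled_terrorist_video.mp4"),  # placeholder for labeled content
}

def check_upload(path: str) -> bool:
    """True if an upload exactly matches known terrorist content in the shared database."""
    return media_fingerprint(path) in shared_hash_database

print(check_upload("incoming_upload.mp4"))
```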

 

Since its foundation, GIFCT companies have contributed over 300,000 unique hashes, or digital fingerprints, of known terrorist images and video propaganda to our shared industry database, so member companies can quickly identify and take action on potential terrorist content on their respective platforms. We have made progress in large part by working together as a collective of technology companies, but we have also partnered with experts in government, civil society and academia who share our goal. For example, by working with Tech Against Terrorism, a UN Counter-Terrorism Committee Executive Directorate-mandated NGO, GIFCT has brought over 140 tech companies, 40 NGOs and 15 government bodies together in workshops across the world to date. In 2019, we held four workshops—in the US, Jordan, India and the UK—to discuss and study the latest trends in terrorist and violent extremist activity online.

 

Bonworth says: “Currently, almost all social media firms use self-policing to combat the spread of terrorist content online. The issue here is whether it is reasonable to expect social media organisations to achieve 100-per-cent efficiency when removing extremist content, balanced against awareness of the potential damage of such material being available in the public domain, including online radicalisation.” In a report released in February 2019, Facebook said that pro-terrorist posts that make it past the automated tools and are then reported by users now generally stay up for just 18 hours, down from 43 hours the previous year.

 

In June 2018, at the Global Internet Forum to Counter Terrorism, Home Secretary Sajid Javid acknowledged that tech companies had “made progress” in dealing with the issue, but said there is “more to do”. In a speech earlier this year, the European Commission president Jean-Claude Juncker suggested that technology companies should be given an hour to remove such content, or face fines up to 4 per cent of annual revenue. The proposal needs backing from EU countries to become law.

 

However, in the opinion of Congressman Max Rose, who chairs the House Homeland Security Subcommittee on Intelligence and Counterterrorism, the viral spread of the Christchurch video exposed the real limitations of GIFCT’s skeletal consortium: a shoestring operation with no permanent staff, no shared location, and minimal technological and policy collaboration between the companies. The companies, he argues, oversell the capabilities of artificial intelligence, undersell the nature of the challenge, and obscure the amount of resources they are devoting to fighting online terror content.

 

He made several recommendations: “First, GIFCT must have permanent staff who serve as dedicated points of contact for the companies and law enforcement. Further, moving its operations into a shared physical location, as is done in the military and intelligence community, could help companies stay ahead of online terrorist activity.”

 

Second, GIFCT must develop industry standards on terrorist content removal. How long is too long for terrorist content to remain live online? What error rates are acceptable from machine learning tools targeted at taking down terrorist content? How quickly are users’ reports of terrorist content handled? Having clear standards will not only help social media companies advance the ball together, but also enable lawmakers and the public to understand how well the companies are handling terrorist content.

 

Third, GIFCT must explore opportunities for cooperation beyond simply maintaining a collective database of digital fingerprints that help the companies identify terrorist images and videos after they’ve already gone live. In order to build a truly robust operation, the companies must consider sharing terrorism-related artificial intelligence training data and other technologies between themselves and with smaller, less-resourced social media companies that must have a seat at the table as well.

 

Given its expanding capacities, it was announced at the UN General Assembly on 23 September 2019 that GIFCT would become an independent NGO. In June 2020, the NGO’s first executive director, Nicholas J. Rasmussen, former director of the US National Counterterrorism Center, was announced, along with a multi-sector, international Independent Advisory Committee (IAC). The IAC serves as a governing body tasked with advising on GIFCT priorities, assessing performance and providing strategic expertise. It is made up of representatives from seven governments, two international organisations, and 12 members of civil society, including counterterrorism and countering-violent-extremism experts; digital, free expression and human rights advocates; academics; and others.

 

Until now, the Global Internet Forum to Counter Terrorism’s (GIFCT) database has focused on videos and images from terrorist groups on a United Nations list and so has largely consisted of content from Islamist extremist organizations such as Islamic State, al Qaeda and the Taliban. In July 2021 it was reported that over the next few months, the group will add attacker manifestos – often shared by sympathizers after white supremacist violence – and other publications and links flagged by U.N. initiative Tech Against Terrorism. It will use lists from intelligence-sharing group Five Eyes, adding URLs and PDFs from more groups, including the Proud Boys, the Three Percenters and neo-Nazis.

 

The tech group wants to combat a wider range of threats, said GIFCT’s Executive Director Nicholas Rasmussen in an interview with Reuters. “Anyone looking at the terrorism or extremism landscape has to appreciate that there are other parts… that are demanding attention right now,” Rasmussen said, citing the threats of far-right or racially motivated violent extremism.

 

Artificial intelligence to deter and remove terrorist propaganda online

One of the technologies the group is likely to share is artificial intelligence, something Facebook recently said it believed would be key to tackling the rise of hate speech and terrorist recruitment online. In April 2018, Facebook’s Mark Zuckerberg appeared before the United States Congress and outlined ways that Facebook was tackling the problem, including using AI to proactively identify content and investing in larger review teams to monitor and remove content that could inflame and inspire violence.

 

The social media platform, which is used by billions of people around the world, employs thousands of people to check posts and has a dedicated counter-terrorism team. “Our Community Operations teams around the world — which we are growing by 3,000 people over the next year — work 24 hours a day and in dozens of languages to review these reports and determine the context. This can be incredibly difficult work, and we support these reviewers with onsite counseling and resiliency training,” it said. YouTube has increased its efforts to remove content promoting terrorism, and has committed to hiring 10,000 people to help to enforce tough new standards. It reported in April that it had removed 8.3 million videos in three months.

 

Twitter added it works with law-enforcement agencies when appropriate and partners with groups that work to counter extremist content online. In a blog post, the company said its actions went beyond the account suspensions. “We have increased the size of the teams that review reports, reducing our response time significantly. We also look into other accounts similar to those reported and leverage proprietary spam-fighting tools to surface other potentially violating accounts for review by our agents. We have already seen results, including an increase in account suspensions and this type of activity shifting off of Twitter.”

 

Artificial intelligence and data analytics tools have become critical in successfully preventing crime. Many police forces are already trialling forms of ‘predictive policing’, largely to forecast where there is a high risk of ‘traditional’ crimes like burglary and to plan officers’ patrol patterns accordingly, says the UK’s Modern Crime Prevention Strategy. Data analytics can be used to identify vulnerable people and to ensure potential victims are identified quickly and consistently. These tools can also be used to pinpoint and monitor pathways to radicalization, stop the spread of terrorist propaganda and better identify individuals being radicalized.

 

Speed is the most important factor when dealing with such content, says Hannah Johnson, managing director at Blue State Digital, an online content agency that works with NGOs and came to global prominence after working on Barack Obama’s 2008 presidential campaign. Johnson said: “When it comes to moments of crisis or where content of a certain nature appears, timing is crucial. We’ve found that the first 72 hours can have a huge impact on how far it travels and wider public perception.”

 

Platforms such as Facebook and Twitter are now using technologies such as machine learning to “flag” terrorist content more rapidly. Facebook reports that 99 per cent of content relating to Daesh or Al-Qaeda is taken down automatically before users even have a chance to flag the content to the company. And it’s working. The platform says it removes 83 per cent of such content within the hour. Facebook has removed 14.3 million pieces of terrorist content in 2018.

 

As the company said in a recent blog post: “The improvements we’ve made to our technical tools have allowed for continued and sustained progress.” “At Facebook, we recognize the importance of keeping people safe, and we use technology and our counterterrorism team to do it,” said Monika Bickert, vice president of global policy management and Brian Fishman, global head of counterterrorism policy in a statement this year.

 

“Facebook policy prohibits terrorists from using our service, but it isn’t enough to just have a policy. We need to enforce it. Our newest detection technology focuses on Daesh, al-Qaeda, and their affiliates — the groups that currently pose the broadest global threat.”

 

Facebook has started using artificial intelligence programmes to deter and remove terrorist propaganda online after the platform was criticised for not doing enough to tackle extremism. In a post titled “Hard Questions”, Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, explained that Facebook has been developing artificial intelligence to detect terror videos and messages before they are posted and to prevent them from appearing on the site.

 

“When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site.

 

“We have also recently started to experiment with using AI to understand text that might be advocating for terrorism.” Facebook also detailed how it is working with other platforms, clamping down on accounts being re-activated by people who have previously been banned from the site and identifying and removing clusters of terror supporters online.
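Facebook has not described these text models in detail, but “using AI to understand text” generally means a supervised classifier trained on labelled examples. The toy sketch below uses scikit-learn as a stand-in (Facebook’s production models are far larger and not public), with an invented two-example training set, just to show the shape of such a pipeline; its output score could feed a triage rule like the one sketched earlier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = advocating terrorism, 0 = benign.
texts = [
    "join the fight and attack the disbelievers",
    "join us for the charity fun run this weekend",
]
labels = [1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Probability that a new post advocates terrorism.
print(model.predict_proba(["attack the disbelievers tonight"])[0][1])
```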

 

Twitter says that 75 per cent of terrorism-linked accounts are removed even before they post their first tweet. It has also announced that it has suspended 1.2 million suspected terrorist accounts since August 2015, with suspensions in the latest reporting period down 8.4 per cent from the previous one, which it hailed as evidence of the success of its campaign. The platform said at the time: “Twitter has a zero tolerance approach to terrorism. We’ve listened, learned, and vastly improved how we use our technological capabilities to tackle terrorist content. Now, 95 per cent of terrorist content is removed proactively through our technology.”

Algorithms and AI alone can’t help Facebook tackle online extremism

Deploying Artificial Intelligence (AI) for counterterrorism is not as simple as flipping a switch. Depending on the technique, you need to carefully curate databases or have human beings code data to train a machine. A system designed to find content from one terrorist organisation may not work for another because of language and stylistic differences in their propaganda. However, the use of AI and other automation to stop the spread of terrorist content is showing promise. As discussed in our most recent Community Standards Enforcement Report, in just the first three months of 2020, we removed 6.3 million pieces of terrorist content, with a proactive detection rate of 99 percent.

 

There is no one tool or algorithm to stop terrorism and violent extremism online. Instead, we use a range of tools to address different aspects of how we see dangerous content manifest on our platforms, writes Erin Saltman for Facebook.

 

Some examples of the tooling and AI used to proactively detect terrorist and violent extremist content include:

  • Image and video matching: When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, for instance, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.
  • Language understanding: We have used AI to understand text that might be advocating for terrorism. These models are specific to a language and often to a broad group type.
  • Removing terrorist clusters: We know from studies of terrorists that they tend to radicalise and operate in clusters. This offline trend is reflected online as well. So, when we identify pages, groups, posts or profiles as supporting terrorism, we also use algorithms to “fan out” to try to identify related material that may also support terrorism (a simplified sketch of this fan-out follows the list). We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.
  • Recidivism: We are now much faster at detecting new accounts created by repeat offenders (people who have already been blocked from Facebook for previous violations). Through this work, we have been able to dramatically reduce the time that terrorist recidivist accounts are on Facebook. This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too. We are constantly identifying new ways that terrorist actors try to circumvent our systems, and we update our tactics accordingly.
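The “fan out” idea in the list above can be illustrated with a small graph traversal. The sketch below is a deliberately simplified stand-in for Facebook’s unpublished systems: it starts from accounts already disabled for terrorism and walks their friendship graph, scoring each neighbour by the fraction of its friends that have been disabled; accounts above a threshold would be queued for human review. The graph, account names and threshold are all invented.

```python
from collections import deque

# Hypothetical friendship graph: account id -> set of friend ids.
friends = {
    "a1": {"a2", "a3"},
    "a2": {"a1", "a3", "a4"},
    "a3": {"a1", "a2"},
    "a4": {"a2", "a5"},
    "a5": {"a4"},
}
disabled_for_terrorism = {"a1", "a3"}  # seed accounts already actioned
REVIEW_THRESHOLD = 0.5                 # hypothetical fraction of disabled friends

def fan_out(seeds: set) -> set:
    """Breadth-first walk from disabled accounts; flag neighbours with many disabled friends."""
    flagged, queue, seen = set(), deque(seeds), set(seeds)
    while queue:
        account = queue.popleft()
        for neighbour in friends.get(account, set()):
            if neighbour in seen:
                continue
            seen.add(neighbour)
            ratio = len(friends[neighbour] & disabled_for_terrorism) / len(friends[neighbour])
            if ratio >= REVIEW_THRESHOLD:
                flagged.add(neighbour)  # strong signal: send to human review
                queue.append(neighbour) # keep fanning out from flagged accounts
    return flagged

print(fan_out(disabled_for_terrorism))  # e.g. {'a2'}
```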

 

The use of AI against terrorism is increasingly bearing fruit, but ultimately it must be reinforced with manual review from trained experts. To that end, we utilise expertise from inside the company and from the outside, partnering with those who can help address extremism across the internet.

 

While some overtly violating content can be removed directly with automation, the technology and AI are also programmed to triage a large amount of content to our human review and subject-matter expert teams. More tech solutions do not mean less human involvement. Often it is the opposite. Human expertise is needed for nuanced understanding of language, detecting new trends and reviewing content that is not obviously violating. Along with increased industry collaboration, we continue to deepen our bench of internal specialists, including linguists, subject matter experts, academics, former law enforcement personnel and former intelligence analysts. We now have 350 people working full time on our dangerous organisations teams. This includes full-time support for policy, engineering, operations, investigations, risk and response teams. This is supplemented by over 35,000 people in our safety and security teams around the world who assist with everything from translation to escalations. These teams have regional expertise in understanding the nuanced existence of terrorist groups around the world and also help us build stronger relationships with experts outside the company who can help us identify regional trends and adversarial shifts in how terror groups are attempting to use the internet.

 

Ultimately this is about finding the right balance between technology, human expertise and partnerships. Technology helps us manage the scale and speed of online content. Human expertise is needed for a nuanced understanding of how terrorism and violent extremism manifest around the world and to track adversarial shifts. Partnerships allow us to see beyond trends on our own platform, better understand the interplay between online and offline, and build programmes with credible civil society organisations to support counterspeech at scale, writes Erin Saltman for Facebook.

 

Despite Facebook’s increasing efforts, we know that countering terrorism and violent extremism effectively is ever evolving and cannot be done alone. The nature of the threat is both cross-platform and transnational. That is why partnerships with other technology companies and other sectors will always be key.

References and Resources also include:

https://www.siliconrepublic.com/companies/twitter-facebook-anti-terrorism-unit

https://link.springer.com/article/10.1007/s12115-017-0114-0

https://www.theatlantic.com/international/archive/2016/02/twitter-isis/460269/

http://www.philly.com/philly/opinion/commentary/terror-attack-sayfullo-saipov-manhattan-technology-artificial-intelligence-radicalization-20171103.html

https://www.cnet.com/news/homeland-security-chief-kirstjen-nielsen-terrorists-find-new-ways-to-recruit-online/

https://www.telegraph.co.uk/technology/information-age/hate-speech-and-terrorism-on-social-media/

https://thehill.com/blogs/congress-blog/technology/454342-social-media-companies-are-failing-to-stop-the-spread-of

https://www.lawfareblog.com/challenges-combating-terrorism-and-extremism-online
