Scalable Computing and Analytic Methods in Action

Featured Research

  • Tracking News Manipulation by Malicious State Actors

    During the pandemic, both Russia and China used authoritarian power over the media to manipulate the news. The authors developed a proof-of-concept tool that uses natural language processing and machine learning to better detect such propaganda campaigns and guard against them in the future.

    Nov 15, 2021

  • Machine Learning Can Detect Online Conspiracy Theories

    As social media platforms work to prevent malicious or harmful uses of their services, an improved machine learning model can detect and interpret the language of conspiracy theories. Insights from this modeling effort can help counter the effects of online conspiracies.

    Apr 29, 2021

  • Student Helps Develop Method to Detect Subversive Social Media Campaigns

    The U.S. has a capability gap in detecting malign or subversive information campaigns in time to respond before they influence large audiences. Krystyna Marcinek (cohort '17) helped develop a novel method to detect these efforts.

    Mar 16, 2020

  • Thinking Machines Will Change Future Warfare

    Until now, deterrence has been about humans trying to dissuade other humans from doing something. But what if the thinking is done by AI and autonomous systems? A wargame explored what happens to deterrence when decisions can be made at machine speeds and when states can put fewer human lives at risk.

    Jan 27, 2020

  • How Well Is DoD Positioned for AI?

    The U.S. Department of Defense has articulated an ambitious vision and strategy for artificial intelligence. But if it wants to get the maximum benefit from AI-enhanced systems, then it will need to improve its posture along multiple dimensions.

    Dec 17, 2019

  • The Emerging Risk of Virtual Societal Warfare

    Living in an information society opens unprecedented opportunities for hostile rivals to cause disruption, delay, inefficiency, and harm. Social manipulation techniques are evolving beyond disinformation and cyberattacks on infrastructure sites. How can democracies protect themselves?

    Oct 9, 2019

  • Addressing the Challenges of Algorithmic Equity

    Social institutions increasingly use algorithms for decisionmaking purposes. How do different perspectives on equity or fairness inform the use of algorithms in the context of auto insurance pricing, job recruitment, and criminal justice?

    Jul 11, 2019

  • A World-Building Workshop on the Future of Artificial Intelligence

    How might artificial intelligence (AI) be used to shape a new world? A workshop engaged a group of innovative thinkers in an approach called large-scale speculative design to sketch desirable, AI-enabled future worlds.

    Jun 21, 2019

  • Artificial Intelligence Applications to Support Teachers

    Artificial intelligence could support teachers rather than replace them. But before the promise of AI in the classroom can be realized, risks and technical challenges must be addressed.

    Jan 23, 2019

  • Using Video Analytics and Sensor Fusion in Law Enforcement

    A workshop examined issues associated with the use of video technology in law enforcement, including business cases for its use, key innovation needs, and privacy and civil rights protections.

    Dec 28, 2018

  • How Might Artificial Intelligence Affect the Risk of Nuclear War?

    Experts agree that AI has significant potential to upset the foundations of nuclear security. But there are also ways that machines could help ease distrust among international powers and decrease the risk of nuclear war.

    Apr 24, 2018

  • The Risks of AI to Security and the Future of Work

    As artificial intelligence (AI) becomes more prevalent in the domains of security and employment, what are the policy implications? What effects might AI have on cybersecurity, criminal and civil justice, and labor market patterns?

    Dec 6, 2017

  • The Risks of Bias and Errors in Artificial Intelligence

    Machine learning algorithms and artificial intelligence (AI) influence many aspects of life today. Because these systems are designed, built, and taught by humans, they are not immune to error or bias. While AI has great promise, using it introduces a new level of risk and complexity in policymaking.

    Apr 6, 2017

  • Using High-Performance Computing to Support Water Resource Planning

    Researchers from RAND and the Lawrence Livermore National Laboratory used high-performance computer simulations to stress-test several water management strategies over many plausible future scenarios in near real time (an illustrative sketch of this kind of scenario stress-testing follows this list).

    Aug 25, 2016

  • Examining ISIS Support and Opposition on Twitter

    ISIS uses Twitter to inspire followers, recruit fighters, and spread its message. Its opponents use Twitter to denounce the group. To identify and characterize both networks in detail, researchers use a mixed-methods analytic approach that combines community detection algorithms to find interactive communities of Twitter users, lexical analysis to identify key themes and content in large data sets, and social network analysis (an illustrative sketch of the community-detection step follows this list).

    Aug 16, 2016
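
Illustrative sketch (water resource planning). The study above stress-tested water management strategies across many plausible futures using high-performance computing. The toy reservoir model, the three strategies, and the scenario grid below are hypothetical stand-ins, intended only to show the general pattern of running one simulation per strategy-scenario pair in parallel; they do not reproduce the study's models, scenarios, or computing setup.

```python
# Rough illustration of scenario stress-testing for water management strategies.
# The strategies, scenario grid, and toy reservoir model are hypothetical
# stand-ins, not the models used in the RAND/Lawrence Livermore study.
from itertools import product
from multiprocessing import Pool

# Hypothetical strategies, expressed as a fractional reduction in demand.
STRATEGIES = {"status_quo": 0.0, "conservation": 0.15, "new_supply": 0.10}

def simulate(args):
    """Run a toy 30-year reservoir balance for one (strategy, scenario) pair."""
    strategy, demand_growth, inflow_factor = args
    storage, shortages = 100.0, 0
    demand, inflow = 10.0, 10.0 * inflow_factor
    for _ in range(30):
        demand *= 1.0 + demand_growth
        effective_demand = demand * (1.0 - STRATEGIES[strategy])
        storage += inflow - effective_demand
        if storage < 0:
            shortages += 1          # count a shortage year
            storage = 0.0
        storage = min(storage, 200.0)  # reservoir capacity cap
    return strategy, demand_growth, inflow_factor, shortages

if __name__ == "__main__":
    # Cartesian grid of plausible futures: demand growth x reduced inflows.
    scenarios = list(product(STRATEGIES, [0.00, 0.01, 0.02], [0.7, 0.85, 1.0]))
    with Pool() as pool:            # spread simulation runs across CPU cores
        results = pool.map(simulate, scenarios)
    # Rank strategies by total shortage-years across all futures.
    totals = {}
    for strategy, *_, shortages in results:
        totals[strategy] = totals.get(strategy, 0) + shortages
    for strategy, total in sorted(totals.items(), key=lambda kv: kv[1]):
        print(f"{strategy}: {total} shortage-years across all scenarios")
```

On a laptop this uses a handful of cores; the same pattern, scaled to thousands of scenarios and a full planning model, is what motivates running the analysis on high-performance computing.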
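
Illustrative sketch (ISIS support and opposition on Twitter). The item above names community detection, lexical analysis, and social network analysis as the core techniques. The snippet below sketches only the community-detection step, assuming a hypothetical edge list of Twitter mention pairs and using the greedy modularity algorithm from the networkx library; the study's actual algorithm, data, and pipeline are not reproduced here and may differ.

```python
# Minimal sketch of the community-detection step. The input file name, its
# "source,target" layout, and the choice of algorithm are assumptions.
import csv
from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build a mention network: nodes are Twitter accounts, edges connect accounts
# that mention one another (hypothetical CSV with "source,target" rows).
G = nx.Graph()
with open("mention_edges.csv", newline="") as f:
    for row in csv.reader(f):
        if len(row) == 2:
            G.add_edge(*row)

# Detect interactive communities via greedy modularity maximization
# (one common off-the-shelf choice; the study does not specify its algorithm).
communities = greedy_modularity_communities(G)

# Summarize the largest communities and their most central accounts, a typical
# starting point before lexical analysis of each community's tweets.
for i, members in enumerate(sorted(communities, key=len, reverse=True)[:5]):
    subgraph = G.subgraph(members)
    centrality = nx.degree_centrality(subgraph)
    top_accounts = [name for name, _ in Counter(centrality).most_common(3)]
    print(f"Community {i}: {len(members)} accounts; most central: {top_accounts}")
```

In the mixed-methods workflow the item describes, the tweets of each detected community would then be passed to lexical analysis to surface key themes, with social network analysis characterizing how the communities interact.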