8:00 am – 8:45 am | Registration and introductions |
8:45 am – 9:00 am | Welcome notes (Cliff Wang, Army Research Office) |
9:00 am – 10:30 am |
Session 1 (Chair: Gang Qu, University of Maryland):
Human Behavior and Psychology in Cyber Deception
- The Psychology of Deception in the Cyber World, Cleotilde (Coty) Gonzalez, CMU • Details
Cyber attacks fundamentally occur by taking advantage of the power of deception. Deception involves intentionally misleading an agent into believing that something is safe or true when it is not, in order to gain an advantage over that agent. Deception is an important topic in psychological research, and much is known about the cues to detecting deception, individual differences in deception, and other themes. Deception is also well known as a political and military strategy; in fact, the "principles of war" rely on the advantages of "surprise": fooling the opponent while not being fooled yourself. While there is a considerable amount of work on the psychology of deception, most of what we know relies on the physical observation of behavior. Many relevant insights can be carried over from deception in the physical world to the cyber world; fundamentally, effective deception is often based on the exploitation of the victim's cognitive assumptions. However, the cues of deception in the cyber world are very different from those in the physical world, and little if anything is known about deception in the cyber world. In a project recently funded by the Army Research Office, we will study the psychology of the process of deception in the cyber world. We will investigate how humans may generate deceptive cyber-attack strategies, and how deception may be detected and counteracted by humans to keep their information safe. We will concentrate on the cognitive and social aspects of human behavior to help develop effective defenses against future attacks. A basic science of deception from the human perspective will inform the development of cognitive models that can be used to detect and combat cyber attacks. In this talk I will discuss the power of psychological strategies as well as some of the empirical and modeling work we have started to conduct.
- Strategic Deceptive Signaling and Human Behavior Modeling in Security Games, Milind Tambe, USC • Details
A major aspect of improving cybersecurity is to understand the interactions of the humans in the loop — defenders trying to protect cyber systems, users trying to use the cyber systems, and attackers trying to attack them. To that end, our overarching research objective is to lay a game-theoretic foundation for a new human science of cybersecurity. We will build on our foundational work on the "security games" framework – based on computational game theory, while also incorporating elements of human behavior modeling, AI planning under uncertainty, and machine learning – which has led to the development and deployment of decision aids for security agencies in the US and around the world. We will emphasize two key aspects of this research: strategic (deceptive) signaling and modeling human bounded rationality. With respect to strategic signaling, we propose a two-stage security game model: in the first stage the defender allocates resources, and in the second stage the defender strategically reveals (potentially deceptive) local information about the targeted asset, potentially deterring the attacker's plan. With respect to human behavior modeling, we present a model that (i) reasons about the success or failure of the adversary's past actions on exposed portions of the attack surface to capture adversary adaptiveness, and reasons about similarity between exposed and unexposed areas of the attack surface; and (ii) integrates a non-linear probability weighting function whose fitted shape contradicts well-accepted results on human probability weighting.
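As an illustration of the kind of non-linear probability weighting referred to above (our addition; the abstract does not specify the functional form used), Prelec's two-parameter family is a common parametric choice:

```latex
% Prelec's probability weighting function (illustrative choice only;
% the talk's exact functional form is not given in the abstract):
\[
  w(p) \;=\; \exp\!\bigl(-\beta\,(-\ln p)^{\gamma}\bigr),
  \qquad 0 < p \le 1,\;\; \beta > 0,\;\; \gamma > 0.
\]
% gamma < 1 yields the classic inverse-S shape (rare events overweighted);
% an S-shaped fit (gamma > 1) would contradict that well-accepted result.
```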
- Authentication Behaviors and Deceptions, George Cybenko, Dartmouth College • Details
Many cyber attacks today start with credentials that attackers obtain through non-technical means such as social engineering and phishing. Those credentials are then used to aggressively explore and exploit discovered resources to achieve the attacker's goals. Because the credentials are valid, the attack is based on a simple masquerading deception. The detection of the deception must be based on temporal and structural analyses of the authentication logs. We will discuss the state of the art for this problem and how novel approaches might detect and mitigate such attacks in the future.
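As a sketch of the kind of temporal and structural analysis of authentication logs described above (our illustration, not the speaker's system; the log fields and names are hypothetical), one could score each user's login chains against a per-user Markov model of host-to-host authentications and flag low-likelihood chains as possible credential misuse:

```python
# Hypothetical sketch: a first-order Markov model over authentication
# transitions per user; lateral-movement-style chains score far lower.
from collections import defaultdict
import math

def train(events):
    """events: iterable of (user, src_host, dst_host) tuples."""
    counts = defaultdict(lambda: defaultdict(int))
    for user, src, dst in events:
        counts[(user, src)][dst] += 1
    model = {}
    for key, dsts in counts.items():
        total = sum(dsts.values())
        model[key] = {d: c / total for d, c in dsts.items()}
    return model

def score(model, user, chain, floor=1e-6):
    """Log-likelihood of a login chain [(src, dst), ...] for a user."""
    ll = 0.0
    for src, dst in chain:
        ll += math.log(model.get((user, src), {}).get(dst, floor))
    return ll

history = [("alice", "laptop", "mail"), ("alice", "laptop", "wiki"),
           ("alice", "mail", "wiki")] * 50
m = train(history)
print(score(m, "alice", [("laptop", "mail")]))               # typical
print(score(m, "alice", [("wiki", "dc01"), ("dc01", "db")])) # anomalous
```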
|
10:30 am – 11:00 am | Break and networking (Light refreshments and beverages) |
11:00 am – 12:30 pm |
Session 2 (Chair: Ahmad-Reza Sadeghi, TU Darmstadt):
Case Studies and Practice in Cyber Deception and Defenses
- Design Considerations and Lessons Learned for Building Cyber Deception Systems, Greg Briskin and Jason Li, IAI • Details • Slides
Cyber defenders and mission commanders can use cyber deception as an effective means of protecting mission cyber assets and ensuring mission success, by deceiving and diverting adversaries during the planning and execution of cyber operations and missions. A number of cyber deception schemes depend on learning adversarial intent, capabilities, and methods. This requires continuous direct and indirect engagement with an attacker, which is often hard and sometimes impossible. This presentation will share our experiences developing deception techniques that do not require detection of an ongoing cyber attack. With our approach, the range of possible proactive deception scenarios becomes the product of the security policies set for a given network site. Deception scenarios can be triggered automatically by monitoring for and detecting behavior (not necessarily malicious) that is incompatible with the current security policy (e.g., performing an action beyond the scope of the assigned authority), as sketched below. Such an approach allows the selected cyber deception techniques to coexist with normal user operations. The presentation discusses various aspects to consider when creating a deception scenario: managing believability while constructing and supporting a verifiable deception story, by providing consistency of discovery across the various network channels and conduits through which fictitious information is disclosed. The combinations of deception elements exposed to an attacker are constructed to be perceived as pieces of a puzzle that, when put together, relay a (partially) fictitious picture of the operational environment that matches the attacker's cognitive bias. We will share our experiences combining cyber deception with MTD techniques to "reveal the fiction and conceal the truth", aimed at leading adversaries in the wrong direction and draining their resources. Such a combination allows for multi-layered, multi-phase deception scenarios, with deception units deployed throughout the network and with coordination and synchronization of deception activity across the enterprise. We will also highlight some of the design and implementation challenges, such as minimizing the effect on mission operations and protecting the deception infrastructure from cyber attacks through hardening and stealth. The presentation will describe novel techniques for creating uncertainty, ambiguity, and inconsistency in network and system responses (protocol equivocation), which present a viable alternative to entrapment techniques: they may hold an attacker's attention longer, and they are not vulnerable to a "deception explosion". The authors will present the challenges and opportunities of designing flexible cyber deception systems, the pros and cons of various design choices, and lessons learned in designing such systems with autonomous and man-on-the-loop C2 options.
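A minimal sketch of the policy-triggered idea above (all names and policies are our invention): actions outside a principal's assigned authority are not blocked but are transparently answered from a deception environment, so deception coexists with normal operations:

```python
# Hypothetical policy table: role -> set of permitted actions.
POLICY = {"intern": {"read:/share/public"},
          "admin":  {"read:/share/public", "read:/etc/shadow"}}

def handle(role, action, real_handler, decoy_handler):
    if action in POLICY.get(role, set()):
        return real_handler(action)   # in-policy: normal operation, untouched
    # Out-of-policy (not necessarily malicious) triggers a deception scenario
    return decoy_handler(action)      # serve a consistent fictitious view

real  = lambda a: f"real result for {a}"
decoy = lambda a: f"plausible decoy result for {a}"
print(handle("intern", "read:/share/public", real, decoy))  # real answer
print(handle("intern", "read:/etc/shadow", real, decoy))    # decoy answer
```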
- Cyber-deceptive Software Engineering, Kevin Hamlen, UT Dallas • Details
Security requirements for large, mission-critical software systems have historically focused on enforcing data integrity, confidentiality, and availability properties, but rarely deceptiveness. If attacker deception is considered as a requirement at all, it is typically only pursued for certain specialized software products, such as honeypots and firewalls. In this talk we will argue that robustness of large software systems to modern cyberattacks demands that deceptiveness become a pervasive security requirement of ALL software---even software whose primary purpose is to provide legitimate services rather than resist attacks. This practice will substantially raise risk and uncertainty for attackers, since their every ostensibly successful intrusion might actually be a ruse that misdirects them into divulging valuable TTPs that prepare defenders for future attacks, or identifying information that exposes attackers to compromise or capture. Moreover, we will show how recent advances in cloud computing, virtualization, and compiler-instrumented information flow tracking can be leveraged to realize pervasive deceptiveness properties for large categories of modern software at low cost to software developers, system administrators, and owners. Fully utilizing this new science of cyber-deceptive software engineering will entail scientific contributions from many additional disciplines, including data mining, natural language processing, risk analysis, game theory, psychology of social engineering, and economics of security. Ramifications and potential impact for several of these domains will be examined.
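One concrete way such pervasive deceptiveness can look in code, in the spirit of the honey-patching line of work (this toy sketch and all names are our own invention, not the speaker's implementation): rather than rejecting an exploit probe, a patched check redirects the session to an instrumented decoy that records attacker tradecraft:

```python
def is_exploit(request):            # stand-in vulnerability check
    return "../../etc/passwd" in request

def serve(request):
    return f"200 OK: contents of {request}"

class Decoy:
    def __init__(self):
        self.ttp_log = []
    def serve(self, request):
        self.ttp_log.append(request)        # capture attacker TTPs
        return "200 OK: root:x:0:0:..."     # fabricated 'success'

def deceptive_patch(request, decoy):
    if is_exploit(request):
        return decoy.serve(request)  # intrusion seems to work, but is a ruse
    return serve(request)            # legitimate traffic served normally

d = Decoy()
print(deceptive_patch("index.html", d))        # ordinary user, real service
print(deceptive_patch("../../etc/passwd", d))  # attacker misdirected
print(d.ttp_log)                               # defender's intelligence
```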
- Deception Security and Active Authentication, Sal Stolfo, Columbia University • Details
Encryption, Data Loss Prevention, Endpoint Detection and Response, and User Behavior Analytics technologies all lead the market in preventing data loss, but fail to deliver fully effective solutions. It is clear that new methods and techniques are needed to do a far better job of protecting data. The goal of our early work was to defend against data loss through a principled approach to integrating several security methodologies, including deception and user de-authentication. In this talk we will provide a brief history of our work on the Deception Security and Active Authentication technologies we developed, especially for mobile security.
|
12:30 pm – 1:30 pm | Lunch break and networking (Light refreshments and beverages) |
1:30 pm – 3:30 pm |
Session 3 (Chair: Jason Li, IAI):
Probabilistic Logic and Machine Learning for Cyber Deception
- A Probabilistic Logic of Cyber Deception, V.S. Subrahmanian, Dartmouth College • Details
Malicious attackers often scan nodes in a network in order to identify vulnerabilities that they may exploit as they traverse the network. In this talk, we describe a system that generates a mix of true and false answers in response to scan requests. If the attacker believes that all scan results are true, then he will be on a wrong path. If he believes some scan results are faked, he would have to expend time and effort in order to separate fact from fiction. We propose a Probabilistic Logic of Deception (PLD-Logic) and show that various computations are NP-hard. We model the attacker's state and show the effects of faked scan results. We then show how the defender can generate fake scan results in different states that minimize the damage that the attacker can produce. We develop a Naive-PLD algorithm and a Fast-PLD heuristic algorithm for the defender to use and show experimentally that the latter performs well in a fraction of the run-time of the former. We ran detailed experiments to assess the performance of these algorithms and further show that by running Fast-PLD offline and storing the results, we can very efficiently answer run-time scan requests. Joint work with S. Jajodia, N. Park, F. Pierazzi, A. Pugliese, E. Serra, and G. Simari.
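For flavor only (this is not the Naive-PLD or Fast-PLD algorithm from the talk; the greedy rule and all names are invented), a defender might fake the scan answers whose truthful disclosure would be most damaging, subject to a budget on how many answers may be faked before losing credibility:

```python
def plan_scan_answers(services, damage, lie_budget):
    """services: list of (name, truly_present); damage: name -> expected
    damage if the attacker learns the truth; lie_budget: max faked answers."""
    ranked = sorted(services, key=lambda s: damage[s[0]], reverse=True)
    answers, lies = {}, 0
    for name, present in ranked:
        if lies < lie_budget and damage[name] > 0:
            answers[name] = not present   # fake this scan result
            lies += 1
        else:
            answers[name] = present       # answer truthfully
    return answers

svcs = [("ssh", True), ("smb", True), ("ftp", False)]
dmg = {"ssh": 9, "smb": 5, "ftp": 1}
print(plan_scan_answers(svcs, dmg, lie_budget=1))
# {'ssh': False, 'smb': True, 'ftp': False} -- the riskiest fact is hidden
```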
- Characterizing and thwarting adversarial deep learning, Farinaz Koushanfar, UCSD • Details
This talk presents CuRTAIL, a novel end-to-end computing framework with tools for characterizing and thwarting the adversarial space in the context of Deep Learning (DL). The framework protects deep neural networks against adversarial samples: perturbed inputs carefully crafted by malicious entities to mislead the underlying DL model. The precursor to the proposed methodology is a set of new quantitative metrics for assessing the vulnerability of various deep learning architectures to adversarial samples. CuRTAIL formalizes the goal of preventing adversarial samples as the minimization of the space unexplored by the pertinent DL model, as characterized in CuRTAIL's vulnerability-analysis step. To thwart adversarial machine learning attacks, CuRTAIL introduces the concept of Modular Robust Redundancy (MRR) as a viable solution to achieve the formalized minimization objective. We extensively evaluate CuRTAIL against state-of-the-art attack models. Proof-of-concept implementations analyzing various data collections, including MNIST, CIFAR10, and ImageNet, corroborate CuRTAIL's effectiveness in model assurance and in detecting adversarial samples. We discuss open questions and future directions.
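A highly simplified sketch of the MRR idea (our abstraction, not the CuRTAIL implementation; the models shown are trivial stand-ins): several independent redundancy modules validate the victim model's prediction, and disagreement flags the input as potentially adversarial:

```python
import numpy as np

def mrr_check(x, victim, modules, threshold=0.5):
    """victim, modules: callables mapping an input to a class label.
    Flags x as adversarial when too few modules confirm the prediction."""
    y = victim(x)
    agree = sum(1 for m in modules if m(x) == y)
    verdict = "adversarial" if agree / len(modules) < threshold else "clean"
    return verdict, y

# Stand-ins: in practice, modules could be models trained on denoised or
# dimensionality-reduced views of the data distribution.
victim  = lambda x: int(x.sum() > 0)
modules = [lambda x: int(np.median(x) > 0), lambda x: int(x.mean() > 0)]
print(mrr_check(np.array([0.3, 0.4, -0.1]), victim, modules))  # ('clean', 1)
```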
- Adversarial machine learning for cyber deception, Murat Kantarcioglu, UT Dallas • Details
Deception techniques can be used in cyber security to lure attackers into spending resources and time attacking fake targets. At the same time, sophisticated attackers may probe the underlying systems to detect whether they are real or fake. For example, malware that has infected a host may try to determine whether the machine is a real host or an emulated environment, and may therefore probe the system's properties before attempting further infection. In this setting, we can model the probing attacker as a machine learning model that tries to classify the underlying system as real or fake using the data gathered from the probing, and the defender as a player who tries to force the machine learning model into misclassification (i.e., classifying a fake system as real). Using this analogy, we will discuss our past work on choosing the best features that can resist adversarial attacks, and discuss how some of our adversarial machine learning results could be leveraged to design deceptive systems that are hard for potential attackers to distinguish from real ones.
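As a toy rendering of this analogy (entirely our construction; the weights and feature names are invented), treat the probing attacker as a linear classifier over observable system features and tune the decoy's controllable features until the probe classifies it as real:

```python
import numpy as np

w = np.array([1.5, -2.0, 0.8])   # attacker's (estimated) probe weights
b = -0.2                         # features: [uptime, vm_artifacts, procs]

def looks_real(x):               # attacker's decision: real if score > 0
    return w @ x + b > 0

def tune_decoy(x, controllable, step=0.1, max_iter=100):
    """Greedily adjust controllable feature indices to fool the probe."""
    x = x.copy()
    for _ in range(max_iter):
        if looks_real(x):
            return x
        for i in controllable:
            x[i] += step * np.sign(w[i])   # move along the weight vector
    return x

decoy = np.array([0.1, 0.9, 0.2])   # fresh VM: low uptime, many artifacts
print(looks_real(decoy))                                    # False: detected
print(looks_real(tune_decoy(decoy, controllable=[0, 1])))   # True: passes
```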
- The Art and Science of Adversarial Classification, Eugene Vorobeychik, Vanderbilt • Details
The success of machine learning, particularly in supervised settings, has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators, but make a deliberate effort either to evade the classifiers deployed to detect them or to degrade the quality of the data used to train the classifiers. In this talk I will discuss the scientific foundations of classifier evasion modeling. A dominant paradigm in the machine learning community is to model evasion in “feature space”, that is, through direct manipulation of classifier features. In contrast, the cyber security community has developed several “problem space” attacks, in which actual instances (such as malware) are modified and features are then extracted from the evasive instances. I will show through a case study of PDF malware detection that feature-space models can be a very poor proxy for problem-space attacks. I will then demonstrate that, when this is the case, there exists a simple “fix”: identify a small set of features that are invariant (conserved) with respect to an evasion attack, and constrain these features to remain unchanged in feature-space models. I will then show that such conserved features exist, cannot be inferred using standard regularization techniques, and can be automatically identified for a given problem-space evasion model.
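A minimal sketch of the conserved-feature “fix” described above (our formulation, with an invented linear detector): run a feature-space evasion attack while holding the conserved feature indices fixed, so the model cannot be fooled by edits a real attacker could not make:

```python
import numpy as np

def evade(x, grad, conserved, step=0.05, iters=50):
    """Gradient-style feature-space evasion that never modifies
    the conserved (attack-invariant) feature indices."""
    x = x.copy()
    mask = np.ones_like(x)
    mask[list(conserved)] = 0.0        # freeze conserved features
    for _ in range(iters):
        x -= step * mask * grad(x)     # push the malicious score down
    return x

# Hypothetical linear detector: score > 0 means "malicious".
w = np.array([2.0, 1.0, 3.0])          # feature 2: e.g., embedded-JS flag
grad = lambda x: w                     # gradient of the score w.r.t. x
sample = np.array([1.0, 1.0, 1.0])
print(evade(sample, grad, conserved={2}))   # feature 2 stays at 1.0
```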
|
3:30 pm – 3:45 pm | Break and networking (Light refreshments and beverages) |
3:45 pm – 5:00 pm | Session 4: group discussion |
5:00 pm – 5:15 pm | Concluding remarks |