October 24, 2023
Report

Increased Interpretability for Model-Driven Deception: MARS LDRD Project

Abstract

Machine learning has been proposed as a solution to several cybersecurity problems, and one of its most promising applications is in digital twins for intrusion detection and for driving deceptive defense. However, machine learning techniques often produce a black-box function that is difficult for end users to interpret, which, in the deception context, limits their ability to effectively define decoys. This report provides and demonstrates an approach to validate that the learned equations are accurate. It then begins the process of addressing this interpretability issue for a model-driven deception technology that produces equations representing the physical process controlled by operational technology devices. This research was performed by applying subject matter expert context to machine-learned models.

Published: October 24, 2023

Citation

Hofer, W.J., O. Bel, B. Hyder, M.W. Bruggeman, and T.W. Edgar. 2023. Increased Interpretability for Model-Driven Deception: MARS LDRD Project. Richland, WA: Pacific Northwest National Laboratory.