The Not-So-Ominous Future of Computer System Defense - A Brief Recap
At the conclusion of BSides Charlotte 2019, I realized that I needed to provide more information about the content covered in my slides, especially for those who were unable to attend. The purpose of this post is to highlight the ideas from the presentation and to gather feedback from anyone interested in this area of research. You can find my slides here.
How can one give an overview of the advancements in computer system defense, and what is the scope of the systems we are trying to defend?
Initially, I thought large enterprise networks and data centers were the only environments where system and network design would require a drastic reduction in complexity through increased autonomy, so I sought solutions fitting that scope. Realistically, an optimal systems defense should exhibit the following features:
- Respond at the moment of detection. If this response is automated, defenders can focus on building detections for behavior that deviates from the baseline.
- Respond optimally and in a way that does not degrade the integrity of the environment. The response in this case is envisioned to be deployed by the system itself.
- Increase the cost of attacking the network.
- Ensure that every resource within the environment receives some measure of protection from the suite of defense implementations.
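As a toy illustration of the first goal, responding at the moment of detection against a learned baseline, the sketch below flags a measurement that deviates sharply from baseline observations and triggers a placeholder response. The metric, threshold, and `respond` action are all illustrative assumptions, not a prescription:

```python
from statistics import mean, stdev

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a measurement more than `threshold` standard
    deviations away from the baseline observations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

def respond(host):
    """Placeholder response: a real system might quarantine
    the host or rotate its credentials at this point."""
    print(f"isolating {host}")

# Baseline: typical outbound connections per minute for a host.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
observed = 90  # a sudden spike

if is_anomalous(observed, baseline):
    respond("web-01")  # fires at the moment of detection
```

The point is that once detection is tied directly to a response, the human effort shifts from triage to curating the baseline and the response playbook.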
Contemporary system defense is composed of software solutions that protect network and software resources. Some of those solutions require constant maintenance, whereas others only require human action when an alert fires. Current advancements in defense techniques such as robust moving target defense, active defense, and automated network management require defenders to spend hours configuring custom solutions prior to deployment, and then demand ongoing care afterward.
How can we manage these systems and reduce complexity?
Note: I am not questioning the merit of these technologies. I am simply asking questions about our approaches to making them usable in our solutions.
- Should we continue to throw deep learning or machine learning into all of our products in hopes that it makes them more autonomous and effective at responding?
- Should we deploy everything to the cloud and hope the provider manages the security of applications and infrastructure for us?
- Should we set up a proof-of-concept blockchain that is supposedly ready for deployment to enable the verification of files, users, or hosts in our systems?
- Should we just configure everything to use containers or some lightweight virtualization to enable more efficient automation?
Or what if we put most, if not all, of those solutions together intelligently?
One may consider Software-Defined Networking, SecOps, automation, immutable infrastructure, or the D.I.E. design strategy to be the bleeding-edge technologies and methodologies leading the advancement of cyber defense strategies, but what if that wasn't the entire truth?
Enter the idea of Autonomic Systems
After looking for relevant work on security architecture and design, I came across an idea presented in 2001 by IBM and a DARPA-funded project that described autonomic systems. Like neural networks, the idea of autonomic systems was inspired by biology: in this case, the autonomic nervous system. The general idea is to create an environment that is self-x, meaning self-healing, self-diagnosing, self-optimizing, and self-"aware" (state aware), able to manage itself with minimal intervention. This idea of creating an environment that can adapt and protect itself seems to be what many current advancements are driving towards. It can be seen in technologies ranging from Splunk's Phantom, which is conditionally reactive, to Oracle's Autonomous Database.
Creating the perfect feedback loop
The most important components of any autonomic system are its feedback loops; they drive the behavior of the system. This aspect of autonomic systems, I predict, will be nearly impossible to fully automate, given how creative attackers are and how effective learning algorithms actually prove to be at constructing baselines of behavior. Wisdom I gleaned from a research paper specifically notes the ineffectiveness of applying machine or deep learning to this concept without first building a framework for the learning to operate within.
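IBM's autonomic computing work structures this feedback loop as MAPE-K: Monitor, Analyze, Plan, and Execute phases built around shared Knowledge. A minimal sketch of that shape is below; the metric, baseline, tolerance, and action names are all illustrative assumptions, not any vendor's API:

```python
class MapeKLoop:
    """Minimal MAPE-K feedback loop: Monitor -> Analyze -> Plan ->
    Execute, with each phase consulting shared knowledge."""

    def __init__(self, readings):
        self.readings = iter(readings)  # stand-in telemetry source
        # Knowledge: a learned baseline and a deviation tolerance.
        self.knowledge = {"cpu_baseline": 40, "tolerance": 25}
        self.actions = []

    def monitor(self):
        # Monitor: collect a metric (a real loop would read sensors).
        return {"cpu": next(self.readings)}

    def analyze(self, metrics):
        # Analyze: does the metric deviate beyond tolerance?
        delta = abs(metrics["cpu"] - self.knowledge["cpu_baseline"])
        return delta > self.knowledge["tolerance"]

    def plan(self, metrics):
        # Plan: pick a corrective action from the playbook.
        if metrics["cpu"] > self.knowledge["cpu_baseline"]:
            return "scale_out"
        return "scale_in"

    def execute(self, action):
        # Execute: here we just record the action.
        self.actions.append(action)

    def step(self):
        metrics = self.monitor()
        if self.analyze(metrics):
            self.execute(self.plan(metrics))

loop = MapeKLoop([42, 95, 38, 5])
for _ in range(4):
    loop.step()
print(loop.actions)  # → ['scale_out', 'scale_in']
```

The hard part the paragraph above points at is not this loop structure, which is simple, but filling the knowledge base with baselines and playbooks that hold up against a creative adversary.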
So what does this mean?
It means that autonomic systems are truly on the horizon and this advancement is not stopping any time soon. It is also important for security-minded individuals to know what is coming, so that they are aware of the benefits of true autonomic systems, which could boost the productivity and maturity of the businesses that implement them.
Currently, I am working on a proof-of-concept system that utilizes SaltStack's reactor, SDN technologies, and LXD containers. After speaking at the conference, I realized that the industry needs a way to test the marketed "autonomic feature x". So I plan to use my proof of concept, along with any access I am given to vendor solutions, to test how well these features are implemented.
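For readers unfamiliar with the reactor pattern: SaltStack's reactor maps event tags on the event bus to response states. The sketch below only illustrates that tag-to-handler dispatch pattern in plain Python; the tags, handlers, and event payloads are hypothetical, and this is not Salt's actual configuration format (which is YAML mapping tags to SLS files):

```python
import fnmatch

def quarantine_minion(event):
    # Hypothetical handler: isolate the affected host.
    return f"quarantined {event['id']}"

def restart_service(event):
    # Hypothetical handler: restore a failed service.
    return f"restarted {event['service']} on {event['id']}"

# Map event-tag patterns to handlers, loosely mirroring how a
# reactor maps tags to response states.
REACTIONS = {
    "security/ids/alert/*": quarantine_minion,
    "monitoring/service/down": restart_service,
}

def react(tag, event):
    """Dispatch an event to every handler whose pattern matches its tag."""
    results = []
    for pattern, handler in REACTIONS.items():
        if fnmatch.fnmatch(tag, pattern):
            results.append(handler(event))
    return results

print(react("security/ids/alert/high", {"id": "web-01"}))
# → ['quarantined web-01']
```

The appeal of this pattern for autonomic defense is that responses fire the moment an event hits the bus, with no human in the loop for the well-understood cases.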