Getting Incident Response Help from Richard Feynman
Richard Feynman was kind of a big deal (https://en.wikipedia.org/wiki/Richard_Feynman) and known for being the "Great Explainer."1 One of the many things he is known for is his explanation technique.
The Feynman Technique (https://collegeinfogeek.com/feynman-technique/) is widely used as a study technique, but it also has application in your Incident Response and Investigation processes. It breaks down something like this (a small scaffolding sketch follows the list):
- Grab a sheet of paper or open up a text editor, and at the top put the concept or idea that you want to explain.
- Describe the topic in your own words as if you're trying to teach someone about it. Use plain language, as if you're writing for a non-expert, and use examples to teach with.
- Read your work and identify the areas where your explanation is lacking or where outright gaps exist. Use them to refocus your study.
- Eliminate jargon and technical shortcuts. Make sure that it would all make sense when read by someone who doesn't know what you know.
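If you prefer to scaffold this on a computer, here is a minimal sketch of a worksheet generator. It is illustrative only: the file layout, headings, and the new_worksheet helper are my assumptions, not part of the technique; the prompts simply mirror the steps above.

```python
#!/usr/bin/env python3
"""Minimal sketch: scaffold a Feynman-style worksheet as a Markdown file.

The file layout, headings, and function name are assumptions for
illustration; the prompts mirror the steps listed above, with the
concept itself serving as the top-level heading.
"""
from pathlib import Path

PROMPTS = [
    "Explain it in plain language, as if teaching a non-expert",
    "Gaps and weak spots found on re-reading",
    "Jargon and shortcuts to eliminate or define",
]

def new_worksheet(topic: str, directory: str = ".") -> Path:
    """Create <topic>.md with the concept at the top and a section per step."""
    path = Path(directory) / (topic.replace(" ", "_") + ".md")
    body = "# " + topic + "\n\n"
    body += "".join("## " + prompt + "\n\n" for prompt in PROMPTS)
    path.write_text(body, encoding="utf-8")
    return path

if __name__ == "__main__":
    print(new_worksheet("Phishing ticket 1234"))
```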
It doesn't take a lot of modification to apply this to writing incident narratives. Keep your audience in mind, because narratives aren't read just by your fellow incident handlers but also by upper management, decision makers in IT, and general users. This is why step four is so important.
The technique can also help focus and drive an investigation. It can be used to help answer the crucial "are we finished with our investigation" question.
For example, a narrative in a ticket might read like:

"AV alerted on a malicious file on a user's workstation; the file was quarantined. Closing ticket."

or

"User reported a phishing email. The link in the message was confirmed malicious. Closing ticket."
These kinds of events happen several times a day in many environments. Pretty simple, right? But are those narratives complete enough? Consider the second example: the ticket is going to have a copy of the email and its headers attached, and probably some sort of dynamic-analysis report of the malicious URL. If I have to write that kind of narrative 50 times a day, I might be happy to leave it at that. But I can do better.
Imagining that I present that narrative to the Quality Assurance manager in my head, I hear the following conversation:
QA-me: "Did they click the link?"
IR-me: "They claim that they did not; it's in the attached email."
QA-me: "Okay, add that to the narrative, but did they REALLY not click that link? Did you check the logs? 'Trust but verify.'"
So I update my narrative and go back to check the web logs. It takes a few minutes to determine that they didn't click the link, but sadly a few others did. Drat, this just got a lot messier.
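Checking "did they click?" usually comes down to searching your proxy or web logs for the URL. Here is a minimal sketch of that search, assuming a Squid-style access.log; the log path, field positions, and the URL itself are assumptions to adapt to your environment:

```python
#!/usr/bin/env python3
"""Sketch: list every client that requested a known-bad URL.

Assumes Squid's native access.log format, where field 0 is the unix
timestamp, field 2 is the client IP, and field 6 is the requested URL.
"""

BAD_URL = "http://example.com/invoice.php"  # hypothetical IOC from the ticket
LOG_PATH = "/var/log/squid/access.log"      # hypothetical log location

def clickers(log_path, bad_url):
    """Yield (timestamp, client_ip) for each request to bad_url."""
    with open(log_path, errors="replace") as log:
        for line in log:
            fields = line.split()
            if len(fields) > 6 and bad_url in fields[6]:
                yield fields[0], fields[2]

if __name__ == "__main__":
    hits = list(clickers(LOG_PATH, BAD_URL))
    for timestamp, client_ip in hits:
        print(timestamp, client_ip)
    print(f"{len(hits)} request(s) for the URL in this log")
```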
Think of what other questions the hypothetical QA manager would ask. Iterate until the voices in your head are happy.
Sometimes you're not going to have an answer. For example, consider a case where an employee has installed some unwanted software on one of your servers. You might not get a good answer out of them for why they did it, but you stand a pretty good chance of knowing when it was installed and how it happened. You might even know a bit about when it was used, but you might be missing the monitoring or controls that would let you know where it connected or what it was being used for.
"We don't know" is valid in a narrative as long as you can explain why you don't know. Sometimes exploring why you don't know gives you an idea to actually find out.
Paraphrasing Feynman slightly: if you can't explain what happened, then you probably don't know what happened. A good narrative, with a high-level timeline and a network diagram for your "interesting" incidents, is an important product of your Incident Response process. That is, if you fancy not working the same incident over and over.
1. LeVine, Harry (2009). The Great Explainer: The Story of Richard Feynman.