This weekend, I worked on a pentest report that had been pending for a while. I'll be honest: I'm lazy about writing reports (like many of us, no?). During a pentest, it is mandatory to keep evidence of all your findings: not only the tools you used and how you used them, but as much detail as possible (screenshots, logs, videos, papers, etc).

Every day, we had a quick debriefing meeting with the customer to review the new findings. The first feedback was often a "Yes, but...":

Me: "We were able to connect a USB stick and drop unwanted tools onto the laptop!"

Me: "We were able to find sensitive documents on an admin's desktop!"

Me: "We were able to connect to remote servers using the same credentials!"

I could write tons of examples like these! When you're engaged in a pentest, it is executed within a defined scope at a time "t". If your targeted infrastructure is not ready yet, postpone the project. And if you think you are ready, accept that you may be compromised!

When speaking to customers, I like to compare a pentest to a plane crash. Even if a plane crash often results in many deaths, planes can be considered safe: statistically, flying is less dangerous than driving to the airport in your car! Modern planes are very reliable: critical components are redundant, and strong procedures and controls are in place. Planes are designed to fly in degraded mode, the cabin crew is trained to handle such situations, and maintenance is scheduled at regular intervals.

How, then, to explain that a plane crash still occurs from time to time? Most of the time, post-crash investigations reveal that the crash is the result of a series of small or negligible issues which, taken out of context, are not critical. But a small incident might introduce a second one, then a third one, and so on. If a series of such small incidents occurs, nasty things may happen, up to... a crash!
This is called the "butterfly effect", which describes how a small change in a deterministic nonlinear system can result in large differences in a later state (definition from Wikipedia). Keep in mind that the same may occur during a pentest: a small issue in a configuration file, combined with files left in a public directory, an unpatched system, and a lack of security awareness among the operators, might result in a complete compromise of your infrastructure. Avoid "Yes, but..." comments and take appropriate action to solve the issues.

Xavier Mertens
Xme 697 Posts ISC Handler Oct 27th 2015
I agree with most of your post, but I disagree with your description of plane crashes. Commercial aviation crashes are due approximately 70% to pilot error and 30% to maintenance error (this does not include design defects). Plane crashes are usually the product of deterministic, linear chains of events: event A leads to event B, which leads to event C, which leads to the crash. If you interrupt the chain, you avoid the crash. The NTSB and other civil and military investigative boards can usually determine the cause of the crash and the sequence of events leading to it, then advise how to avoid similar crashes in the future. If crashes were the effect of a chaotic system, the cause could not be determined. Likewise, you can trace cause and effect in your pentesting back to procedural or operational lapses, because human processes, humans themselves, and machine systems are generally linear, deterministic systems overall.
John jbmoore61@gmail.com |
jbmoore 11 Posts |
Oct 27th 2015
Thank you for the clarification, John. Your knowledge of aviation safety is better than mine. Note that the 70/30 ratio between pilot error and maintenance error could also apply to infosec and humans.
Xme 697 Posts ISC Handler |
Oct 27th 2015
Only tangentially related: this reminds me of a class in officer training school where we discussed the 1994 Black Hawk friendly-fire incident (https://en.wikipedia.org/wiki/1994_Black_Hawk_shootdown_incident). The number of errors, failures, or miscalculations that had to occur for that shooting to take place was striking. Had any one of them not occurred, the shooting wouldn't have happened. It's a decent, albeit grim, analogy for defense in depth.
Juice 12 Posts |
Oct 27th 2015
Xavier,
Your post brought back some excellent "Yes, but..." moments.

Russell
Russell 100 Posts ISC Handler |
Oct 27th 2015
John, greetings! I can't speak for today, but in the USAF of 20 years ago it was generally accepted by maintenance staff (of whom I was one) that 80% of all aircraft failures were traced to maintenance causes, and only 20% to pilot error. I am very interested to note that things are so different in commercial aviation; thank you for providing this information.

Otherwise, I agree completely with Xavier's statements: if you claim you are ready for a pentest, and the pentest reveals deficiencies in your work, you should be GRATEFUL that the pentester did a good job, found your problem areas, and notified you so that you can fix them. We are all just human (see sentence #2 above), and when we can get help and guidance in becoming better, that is a good thing.
Jim |
Jim 1 Posts |
Oct 28th 2015