I don’t actually run this SOC (or any other) 🙂 But, as a certified “blue team” member, I’m pretty excited about the crop of new companies and ideas that are springing up in the areas of SOC analysis, deception technology, lateral/external movement detection, etc. Some of the cool new(ish) vendors that I am falling deeply in love with will be briefly enumerated below. If I were running a SOC, here are some vendors (or technologies) that I would have to add on top of the existing players (centralized logging, vulnerability scan data, IDS/IPS data, firewall/proxy data, etc.)
“The Florentine Deception”, Carey Nachenberg, 2015, 978-1-5040-0924-9
%A Carey Nachenberg http://florentinedeception.com
%C 345 Hudson Street, New York, NY 10014
%G 978-1-5040-0924-9 150400924X
%I Open Road Distribution
%O U$13.49/C$18.91 www.openroadmedia.com
%O Audience n+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P 321 p.
%T “The Florentine Deception”
It gets depressing, after a while. When you review a bunch of books on the basis of the quality of their technical information, books of fiction are disappointing. No author seems interested in making sure that the technology is in any way realistic. For every John Camp, who pays attention to the facts, there are a dozen Dan Browns who just make it up as they go along. For every Toni Dwiggins, who knows what she is talking about, there are a hundred who don’t.
So, when someone like Carey Nachenberg, who actually works in malware research, decides to write a story using malicious software as a major plot device, you have to be interested. (And besides, both Mikko Hypponen and Eugene Spafford, who know what they are talking about, say it is technically accurate.)
I will definitely grant that the overall “attack” is technically sound. The forensics and anti-forensics make sense. I can even see young geeks with more dollars than sense continuing to play “Nancy Drew” in the face of mounting odds and attackers. That a vulnerability can continue to go undetected for more than a decade would ordinarily raise a red flag, but Nachenberg’s premise is realistic (especially since I know of a vulnerability at that very company that went unfixed for seven years after they had been warned about it). That a geek goes rock-climbing with a supermodel we can put down to poetic licence (although it may increase the licence rates). I can’t find any flaws in the denouement.
But. I *cannot* believe that, in this day and age, *anyone* with a background in malware research would knowingly stick a thumb/jump/flash/USB drive labelled “Florentine Controller” into his, her, or its computer. (This really isn’t a serious objection: it would only take a couple of pages to have someone run up a test to make sure the thing was safe, but …)
Other than that, it’s a joy to read. It’s a decent thriller, with some breaks to make it relaxing rather than exhausting (too much “one damn thing after another” gets tiring), good dialogue, and sympathetic characters. The fact that you can trust the technology aids in the “willing suspension of disbelief.”
While it doesn’t make any difference to the quality of the book, I should mention that Carey is donating all author profits from sales of the book to charity:
copyright, Robert M. Slade 2015 BKFLODEC.RVW 20150609
A recent story (actually based on one from several years ago) has pointed out that, for years, the launch codes for nuclear missiles were all set to 00000000. (Not quite true: a safety lock was set that way.)
Besides the thrill value of the headline, there is an important point buried in the story. Security policies, rules, and procedures are usually developed for a reason. In this case, given the importance of nuclear weapons, there is a very real risk from a disgruntled insider, or even simple error. The safety lock was added to the system in order to reduce that risk. And immediately circumvented by people who didn’t think it necessary.
I used to get asked, a lot, for help with malware infestations by friends and family. I don’t get asked much anymore. I’ve given them simple advice on how to reduce the risk. Some have taken that advice, and don’t get hit. A large number of others don’t ask because they know I will ask whether they’ve followed the advice, and they haven’t.
Security rules are usually developed for a reason, after a fair amount of thought. This means you don’t have to know about security; you just have to follow the rules. You may not know the reason, but the rules are actually there to keep you safe. It’s a good idea to follow them.
(There is a second point to make here, addressed not to the general public but to the professional security crowd. Put the thought in when you make the rules. Don’t make stupid rules just for the sake of rules. That encourages people to break the stupid rules. And the necessity of breaking the stupid rules encourages people to break all the rules …)
(We have the OT indicator to say that something is off topic. This isn’t, because ethics and sociology are part of our profession, but it is a fairly narrow area of interest for most. We don’t have a subject-line indicator for that 🙂)
This article, and the associated paper, are extremely interesting in many respects. The challenge to whole fields of social factors (which are vital to proper management of security) has to be addressed. We are undoubtedly designing systems based on a fundamentally flawed understanding of the one constant factor in our systems: people.
(I suppose that, as long as the only people we interact with are WEIRD westerners, we are OK. Maybe this is why we are flipping out at the thought of China?)
(I was particularly interested in the effects of culture on actual physical perception, which we have been taught is hard-wired.)
– WEIRD, in the context of the paper, stands for Western, Educated, Industrialized, Rich, and Democratic societies.