REVIEW: “Identity Management: Concepts, Technologies, and Systems”, Elisa Bertino/Kenji Takahashi

BKIMCTAS.RVW   20110326

“Identity Management: Concepts, Technologies, and Systems”, Elisa
Bertino/Kenji Takahashi, 2011, 978-1-60807-039-8
%A   Elisa Bertino
%A   Kenji Takahashi
%C   685 Canton St., Norwood, MA   02062
%D   2011
%G   978-1-60807-039-8 1-60807-039-5
%I   Artech House/Horizon
%O   800-225-9977 fax: +1-617-769-6334
%O   Audience i- Tech 2 Writing 1 (see revfaq.htm for explanation)
%P   196 p.
%T   “Identity Management: Concepts, Technologies, and Systems”

Chapter one, the introduction, is a review of general identity related issues.  The definition of identity management, in chapter two, is thorough and detailed, covering the broad range of different types and uses of identities, the various loci of control, the identity lifecycle (in depth), and a very effective technical definition of privacy.  (The transactional attribute is perhaps defined too narrowly, as it could relate to non-commercial activities.)
“Fundamental technologies and processes” addresses credentials, PKI (Public Key Infrastructure), single sign-on, Kerberos, privacy, and anonymous systems in chapter three.  The level of detail varies: most of the material is specific with limited examples, while attribute federation is handled quite abstractly.  Chapter four turns to standards and systems, reviewing SAML (Security Assertion Markup Language), Web Services Framework, OpenID, Information Card-Based Identity Management (IC-IDM), interoperability, other prototypes, examples, and projects, with an odd digression into the fundamental confidentiality, integrity, and availability concepts.  Challenges are noted in chapter five, briefly examining usability, access control, privacy, trust management, interoperability (from the human, rather than machine, perspective, particularly expectations, experience, and jargon), and finally biometrics.

This book raises a number of important questions, and mentions many new areas of work and development.  For experienced security professionals needing to move into this area as a new field, it can serve as an introduction to the topics which need to be discussed.  Those looking for assistance with an identity management project will probably need to look elsewhere.

copyright, Robert M. Slade   2011     BKIMCTAS.RVW   20110326


Give someone enough rope …

Today a Conservative Canadian Senator made a rather bizarre suggestion about giving convicted murderers a rope, and allowing them to hang themselves.  (No, I’m not kidding.  But he later retracted the statement.)

But, never let it be said that we couldn’t look at ideas, regardless of how they come.  Moral repugnance aside, is this a good idea?  Probably not.

Would it save money?  Only if the murderer felt really, really sorry.  And, isn’t that what we wanted out of the justice system in the first place?  So, we might have saved money and wasted a life.

Then again, what if the convicted person was not guilty?  I would think that an innocent person, unjustly convicted, would be a prime candidate for suicide.  So then we have a monetary saving at the cost of an innocent life.

And those who really don’t feel bad about killing people might welcome the option of getting out of a life sentence.  So we may be reducing the deterrent effect if we implement the rope idea.

If we’ve got a real psychopath, is it really a good idea to give him a rope, or poison, or a knife, or a gun, or anything particularly dangerous?  It isn’t too hard to start to imagine scenarios where he/she/it could do some real damage, even within the prison.

Maybe we should chip in and buy the Senator a copy of Schneier’s “Liars and Outliers.”


Certified security awareness

A vendor speaking at a conference (is there any other kind of presentation at conferences these days?) has made a call for a new standard for information security awareness training.

“… the way to do this is via a new infosecurity standard that solely focuses on training and awareness and is delivered in the work environment”

Now, I’m all for security awareness.  I’m all for more security awareness.  I’m all for better security awareness.  I’m all for infosec departments to actually TRY security awareness (since they often say, “well, if it was gonna have worked, it woulda worked by now” and never try it).

But, come on.  A new “standard”?

As the man[1] said, the wonderful thing about computer “standards” is that there are so many to choose from.

What are we going to certify?  Users?  “Sorry, you have been found to be too stupid to use a computer at work.  You are hereby issued this non-jailbroken iPad.”

No, undoubtedly he thinks we are going to “certify” the awareness materials themselves.  Good luck with that.

I’ve been a teacher for a lot of years.  I’ve also been a book reviewer for a lot of years.  And I’ve published books.  Trust me on this: a variant of Gresham’s Law is very active in the textbook and educational materials field.  Bad textbooks drive out good.  As a matter of fact, it’s even closer to Gresham: money drives out good textbooks and materials.  Publishers know there is a lot of money to be made in textbooks and training materials.  Publishers with a lot of money are going to use that money to advertise, create “exclusive” contracts, and otherwise ensure that they have the biggest share of the market.  The easiest way to do that is to publish as many titles as you can, as cheaply as you can.  “Cheaply” means you use contract writers, who can turn out 200-300 pages on anything, whether they know about it or not.

So, do you really think that, if someone starts making noise about a security awareness standard, the publishers won’t make absolutely certain that they’ve got control of the certification process?  That if someone comes up with an independent standard that they can withstand the financial pressures that large publishers can bring to bear?  That if someone creates an independent cert, and firmly holds to principles and standards, that the publishers won’t just create a competing cert, and advertise it much more than the independent cert can ever hope to?

After all, none of us can possibly think of any lousy security product with a lot of money behind it that can command a larger market share than a good, but independent, product, now can we?

[1] Well, maybe it was Andrew Tanenbaum, but maybe it was Grace Hopper.  Or Patricia Seybold.  Or Ken Olsen.


REVIEW: “Surviving Cyberwar”, Richard Stiennon

BKSRCYWR.RVW   20110325

“Surviving Cyberwar”, Richard Stiennon, 2010, 978-1-60590-688-1
%A   Richard Stiennon
%C   4501 Forbes Blvd, #200, Lanham, MD   20706
%D   2010
%G   978-1-60590-688-1 1-60590-674-3
%I   Government Institutes/Scarecrow Press/Rowman & Littlefield Publ.
%O   800-462-6420
%O   Audience n- Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   180 p.
%T   “Surviving Cyberwar”

The introduction is the customarily (for books on currently “hot” topics) vague warning that there is danger out there.

Chapter one, according to the title, is supposed to talk about the “Titan Rain” attacks.  In reality it concentrates on Shawn Carpenter and his personal problems, and says very little either about details of the technology, or ideas for defence.  China, and various activities in espionage (and diplomatic disagreements with the US), is the topic of chapter two.  (One story is not about China.)  Although entitled “Countering Cyber Espionage,” chapter three is just about security tools and malware.  Chapter four lists random aspects of, and attacks on, email.  The Pentagon is dealt with, in similarly haphazard fashion, in chapter five.

A few wars, or tense “situations,” are mentioned in chapter six, along with some possibly related computer involvement.  Chapter seven titularly promises DDoS defence, but mostly just talks about distributed denial of service attacks, along with a mention of the error of using BGP (Border Gateway Protocol) as a routing protocol.  Aspects of social networking, mostly in support of activism, are noted in chapter eight.  Chapter nine is a not-very-useful account of the Estonian cyber-attack of 2007, ten briefly mentions some others in eastern Europe, and eleven mentions the Georgian attack.  There is a rambling dissertation on war and various computer security problems in chapter twelve.  Chapter thirteen appears to be an attempt to provide some structure to the concept of cyberwar, but establishes very little of any significance.  Preparations, by some nations, for cyberwarfare are mentioned in chapter fourteen.  Most of the detail is for the US, and there isn’t much even for them.  A final chapter says that the existence of cyberwarfare could cause troubles for lots of people.

The content and writing are rambling and disorganized.  This reads more like a collection of fifteen lengthy, but not terribly well researched, magazine articles than an actual book.  There are many more informative resources, such as Dorothy Denning’s “Information Warfare and Security” (cf. BKINWRSC.RVW) (which, despite predating this work by a dozen years, still manages to present more useful information).  Stiennon does not add anything substantial to the literature on this topic.

copyright, Robert M. Slade   2011     BKSRCYWR.RVW   20110325


History of crimeware?

C’mon, Infoworld, give us a break.

“There are few viable options to combat crimeware’s success in undermining today’s technologies.”

How about “don’t do dangerous stuff”?

“Crimeware: Foundation of today’s telescreens”

I’m sorry, what has “1984” to do with the use of malware by criminal elements?

“Advancement #1: Form-grabbing for PCs running IE/Windows
Form grabbing, as its name implies, is the crimeware technique for capturing web form data within browsers.”

Can you say “login trojan”?  I knew you could.  They existed even before PCs did.

“Advancement #2: Anti-detection (also termed stealth)”

Oh, no!  Stealth!  Run!  We’re all gonna die!

Possibly the first piece of malware to use some form of stealth technology to hide itself from detection was a virus.  Perhaps you might have heard of it.  It was called BRAIN, and was written in 1986.

“Advancement #5: Source code availability/release
The source codes for Zeus and SpyEye, among the most sophisticated crimeware, were publicly released in 2010 and 2011, respectively.”

And the source code for Concept, which was, at the time, the most sophisticated macro virus (since it was the only macro virus), was released in 1995, respectively.  But wait!  The source code for the CHRISTMA exec was released in 1987!  Now how terrified are you!

“Crimeware in 2010 deployed the capability to disable anti-malware products”

And malware in 1991 deployed the capability to disable CPAV and MSAV.  With only fourteen bytes of code.  As a matter of fact, that fourteen-byte string came to be used as an antivirus signature for a while, since so many viruses included it.
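That fourteen-byte string is a textbook case of signature scanning, which is worth sketching.  This is a minimal illustration only: the signature bytes below are made up, not the actual anti-CPAV/MSAV sequence, and real scanners use far more sophisticated matching.

```python
# Minimal signature scanner: flag data containing a known byte string.
# The fourteen bytes below are a made-up placeholder, NOT the real
# anti-CPAV/MSAV sequence mentioned in the text.
SIGNATURES = {
    "hypothetical-av-killer": bytes.fromhex("b440cd21b8001ccd21e8f2ffc305"),
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]
```

The irony noted above falls out directly: once a byte string is used as a detection signature, every piece of malware that includes it is detected "for free," whether or not the scanner's authors ever saw that particular sample.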

“Advancement #7: Mobile device support (also termed man-in-the-mobile)”

We’ve got “man in the middle” and “meet in the middle.”  Nobody is using “man in the mobile” except you.

“Advancement #8: Anti-removal (also termed persistence)
As security solutions struggle to detect and remove crimeware from compromised PCs, malware authors are updating their code to permit it to re-emerge on PCs even after its supposed removal.”

I’ve got four words for you: “Robin Hood” and “Friar Tuck.”

The author “has served with the National Security Agency, the North Atlantic Treaty Organization, the U.S. Air Force, and two Federal think tanks.”

With friends like this, who needs enemies?


Nightmare on Malware Street

Scientific American, no less, has published an article on malware.  Not that they don’t have every right, it’s just that the article is short on fact or help, and long on rather wild conjecture.

The author does have some points to make, even if he makes them very, very badly.

We, both as security professionals and as a society, don’t take malware seriously enough.  The security literature on the subject is appalling.  It is hard to find books on malware, even harder to find good ones, and well nigh impossible to find decent information in general security books.  The problem has been steadily growing since it was a vague academic topic, and has been ignored for so long that, now that it is a real problem, even most security experts have only a tenuous grasp of it.

Almost all reports do sound like paranoid thrillers.  Promoting the idea of shadowy genius figures in dark corners manipulating us at will, this engenders a kind of overall depression: we can’t possibly fight it, so we might as well not even try.  This attitude is further exacerbated by the dearth of information: we can’t even know what’s going on, so how can we even try to fight it?

It is getting more and more difficult to find malware, mostly because we are constantly creating new places for it to hide.  In the name of “user friendliness,” we are building ever more complex systems, with ever more crevices for the pumas to hide in.

Yes, then he goes off into wild speculation and gets all “Reflections on Trusting Trust” on us.  Which kind of loses the valid points.


REVIEW: “Enterprise Security for the Executive”, Jennifer L. Bayuk

BKESCFTE.RVW   20110323

“Enterprise Security for the Executive”, Jennifer L. Bayuk, 2010, 978-0-313-37660-3
%A   Jennifer L. Bayuk
%C   130 Cremona Dr., P.O. Box 1911, Santa Barbara, CA   93116-1911
%D   2010
%G   978-0-313-37660-3 0-313-37660-3
%I   ABC-CLIO, LLC/Praeger
%O   Audience i- Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   175 p.
%T   “Enterprise Security for the Executive: Setting the Tone from the Top”

In the introduction, Bayuk argues against security planning based on FUD (Fear, Uncertainty, and Doubt) and piecemeal implementation of security tools, and for a holistic and systemic approach to security.  She also recommends the promotion of a security culture in the top ranks of management, setting the “tone at the top” to consider security in a rational and realistic manner.

In chapter one, the author stresses that every organization has a culture, and that the actions (and particularly consistency of actions) by senior management set it, regardless of formal statements.  She also raises interesting points, such as that separation of security from the operational units creates perceptions which may be at odds with the security policy.  (I appreciate her championing of “no exceptions,” although I would argue that a formal exception policy could work as well.)  The discussion of threats and vulnerabilities, in chapter two, is weaker (and the questionable etymology of the term “patch” does not increase confidence in Bayuk’s technical background): ultimately it just seems to say that there are threats.  The title “Triad and True,” for chapter three, may refer to “protect, detect, correct” or the more conventional confidentiality, integrity, and availability.  In fact there are a number of other “triads” mentioned, and the text raises a number of good security concepts generally related to safeguards, but is somewhat scattered and incomplete.  Chapter four talks about risk management, but the process of using it to define a security program remains unclear.  Security factors related to organizational governance structure are examined in chapter five.  Standards, compliance and audit issues are discussed in chapter six.  Chapter seven reviews monitoring, incident response, and investigation.  Requirements for candidates for the position of CSO (Chief Security Officer) are noted in chapter eight.  A template job description is included, but the document is perhaps too narrowly specified to be applicable in many situations.

A fictional case study concludes the book.  (In the introduction, the author promised that all “security horror stories” would be true, but I assume reality is less important in case studies.)  This recapitulates, in narrative form, much of the content of the work.

There is much of value in the text, and it is useful to present that content as it relates to senior management.  Senior management support is, after all, the single most important factor in a successful security program.  However, as noted above, much important material is missing along the way, and the volume appears to be focussed on a particular type of industry or corporation, and so will be less useful to those outside that sphere.

copyright, Robert M. Slade   2011     BKESCFTE.RVW   20110323


Security awareness

A recent Twitter post by Team Cymru pointed at a (very brief) debate about the value of security awareness training.  It’s an issue that has concerned me for a long time.

I got interested in security starting with research into viruses and malware.  Early on, I did a lot of work reviewing the various available products.  In the responses I got to my efforts, one point was abundantly clear: everyone, almost without exception, was looking for the “perfect” antivirus.  Even though Fred Cohen had proven that such an animal could not possibly exist, everybody wanted something they could “set and forget.”

Notice two things.  The first is that perfect security doesn’t exist.  As (ISC)2’s marketing phrase has it, security transcends technology.  The second point is that people aren’t particularly keen on learning about security.  They fight against it.  They have to be motivated into it.  And that motivation tends to be individual and personal.

Which means security awareness training is hard, and individual, and therefore expensive.  Expensive means that companies are loath to try it, in any significant way.  Hundreds of thousands or millions of dollars can be spent on a raft of security technologies, but security awareness programs can only get a budget of a few thousand a year.  Which means they can’t be individual, which means they won’t work very well, which means companies aren’t willing to try them.

The default position people take is to resist security awareness.  They don’t want to know extraneous stuff.  They just want to get on with their jobs.  So, even if you were to produce a really good security awareness program, there would undoubtedly still be some who would resist to the end, and not learn.  They wouldn’t benefit from the program, and they would still make mistakes.  So security awareness training won’t be perfect, either.  Sorry about that.

However, I’ve noticed something over the years.  I get asked, by all my friends and acquaintances, for advice about virus protection, and home computer protection.  Some learn the ins and outs, the dangerous activities, the marks of a phishing email message.  They never ask me to clean their machines.  Some just ask about the “best” antiviral software.  Usually after they’ve asked me to clean off a computer.  I identify what they’ve got, and tell them how they got it.  You shouldn’t [do music sharing|do instant messaging|go to all those weird Websites|open attachments you receive] I tell them.  They always have reasons why they must do those things.  (Not very good reasons, mind you, just reasons.)

You know that old medical joke about “Doctor, it hurts when I do this” “Well, don’t do that”?  It’s not funny.

People ask me what antivirus program I use at home.  Very often I don’t use one, unless I’m testing something.  (At the moment I’m testing two, and I’m about ready to take both of them off, since both of them can be real nuisances at times.)  There are long periods where I run without any “protection.”  I know what not to do.  My wife knows what not to do.  (After all, she read my first book seven times over, while she was editing it.)  We don’t get infected.  Not even by “zero days” or “advanced persistent threats.”

Security technology isn’t perfect.  Security awareness training isn’t perfect.  However, at present, and for as long as I can remember, the emphasis has been on security technology.  We need to give awareness more of a try.

Is security awareness “worth it”?  Is security awareness “cost effective”?  Well, we’ve been spending quite a lot on security technologies (sometimes just piecemeal, unmanaged security technologies), and we haven’t got good security.  Three arguments in favour of at least trying security awareness spending:

1)  When you’ve got two areas of benefit, and you are reaching the limits of “diminishing returns” in one area, the place to put your further money is on the one you haven’t stressed.

2)  Security awareness is mostly about risk management.  Business management is mostly about risk management.  Security awareness can give you advantages in more than just security.

3)  Remember that the definition of insanity is trying the same thing over and over again, and expecting a different result.


Get trained for emergencies

I’ve mentioned this before.

We seem to have had a number of disasters this year: earthquakes, tsunami, a few hurricanes (with one currently sweeping Japan, and another building right now off the east coast of the US), wildfires, you name it.  In the US, this is National Preparedness Month.

So this is a good time to get trained.  It gets you CPEs, usually for free.

And, in a disaster, it makes you part of the solution, not part of the problem.


REVIEW: “Above the Clouds”, Kevin T. McDonald

BKABVCLD.RVW   20110323

“Above the Clouds”, Kevin T. McDonald, 2010, 978-1-84928-031-0
%A   Kevin T. McDonald
%D   2010
%G   978-1-84928-031-0 1-84928-031-2
%I   IT Governance
%O   UK#39.95
%O   Audience n+ Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   169 p.
%T   “Above the Clouds: Managing Risk in the World of Cloud Computing”

The preface does a complicated job of defining cloud computing.  The introduction does provide a simpler description: cloud computing is the sharing of services, at the time you need them, paying for the services you need or use.  Different terms are listed based on what services are provided, and to whom.  We could call cloud computing time-sharing, and the providers service bureaus.  (Of course, if we did that, a number of people would think they’d walked into a forty-five year time-warp.)

The text is oddly structured: indeed, it is hard to find any organization in the material at all.  Chapter one states that the cloud allows you to do rapid prototyping because you can use patched operating systems.  I would agree that properly up-to-date operating systems are a good thing, but it isn’t made clear what this has to do with either prototyping or the cloud.  There is a definite (and repeated) assertion that “bigger is better,” but this idea is presented as an article of faith, rather than demonstrated.   There is mention of the difficulty of maintaining core competencies, but no discussion of how you would determine that a large entity has such competencies.  Some of the content is contradictory: there are many statements to the effect that the cloud allows instant access to services, but at least one warning that you cannot expect cloud services to be instantly accessible.  Various commercial products and services are noted in one section, but there is almost no description or detail in regard to actual services or availability.

Chapter two does admit that there can be some problems with using cloud services.  Despite this admission some of the material is strange.  We are told that you can eliminate capacity planning by using the cloud, but are immediately warned that we need to determine service levels (which is just a different form of capacity planning).  In terms of preparation and planning, chapter three does mention a number of issues to be addressed.  Even so, it tends to underplay the full range of factors that can determine the success or failure of a cloud project.  (Much content that has been provided previously is duplicated here.)  There is a very brief section on risk  management.  The process outline is fine, but the example given is rather flawed.  (The gap analysis fails to note that the vendor does not actually answer the question asked.)  SAS70 and similar reports are heavily emphasized, although the material fails to mention that many of the reasons that small businesses will be interested in the cloud will be for functions that are beyond the scope of these standards.  Chapter four appears to be about risk assessment, but then wanders into discussion of continuity planning, project management, testing, and a bewildering variety of only marginally related topics.  There is a very terse review of security fundamentals, in chapter five, but it is so brief as to be almost useless, and does not really address issues specifically related to the cloud.  The (very limited) examination of security in chapter six seems to imply that a good cloud provider will automatically provide additional security functions.  In certain areas, such as availability and backup, this may be true.  However, in areas such as access control and identity management, this will most probably involve additional charges/costs, and it is not likely that the service provider will be able to do a better job than you can, yourself.  

A final chapter suggests that you analyze your own company to find functions that can be placed into the cloud.

Despite the random nature of the book, the breadth of topics means it can be used as an introduction to the factors which should be considered when attempting to use cloud computing.  The lack of detail would place a heavy burden of research and work on those charged with planning or implementing such activities.  In addition, the heavily promotional tone of the work may lead some readers to underestimate the magnitude of the task.

copyright, Robert M. Slade   2011     BKABVCLD.RVW   20110323


New computers – Windows 7 – security and password aging

Today when I signed on I got a bit of a shock.  The computer warned me that my password was going to expire in 5 days, and I should probably consider changing it.

It was a shock because this is my computer, and I go along with current password aging thinking, which is that a) nobody can figure out who first decided that password aging was such a hot idea, and b) even if it ever was a good idea, in the modern computing environment password aging is a non-starter.  Given that passwords should probably exceed 20 characters, and likely should be somewhat complex, trying to get people to choose a good one more than once every few years (when rainbow tables have been extended) is likely more security-compromising than enhancing.
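The arithmetic behind preferring length over rotation is easy to sketch.  The charset size and the guessing rate below are illustrative assumptions, not measurements, but the orders of magnitude carry the point:

```python
import math

def entropy_bits(length: int, charset: int) -> float:
    """Bits of entropy for a randomly chosen password."""
    return length * math.log2(charset)

def years_to_exhaust(bits: float, rate: float = 1e10) -> float:
    """Years to try the whole keyspace at `rate` guesses/second
    (the rate is a hypothetical figure for illustration)."""
    return 2**bits / rate / (365 * 24 * 3600)

# An 8-character password over 62 characters (upper, lower, digits)
# gives roughly 48 bits; a 20-character one gives roughly 119 bits.
short = entropy_bits(8, 62)
long_ = entropy_bits(20, 62)
```

At these assumed numbers, the short password falls in well under a day, so rotating it every 42 days rescues nothing; the long one takes geological time to exhaust, so rotating it accomplishes nothing either.  Either way, the rotation interval is not where the security lives.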

So, I went looking.  Having dealt with security for a number of years, it wasn’t too hard for me to figure out that I didn’t want the control panel (since I hadn’t seen anything along that line while I was modifying other settings), and that I likely wanted “Administrative Tools,” and under that “Local Security Policy.”  I had to read through all the options to determine that I probably wanted “Account Policies,” but, under that, it was obvious I wanted “Password Policy,” and, once there, “Maximum password age” stood out.  With no particular options or actions I went back to the menu bar until I found that “Action” had a “Properties” function, bringing up a dialogue box with an entry box with a number in it.  I figured that setting it to zero might turn off password aging, but I didn’t want to do anything that might require me to set a new password every time I signed on, so, when I saw that one of the tabs was “Explain,” I chose that.

(Allow me to digress for just a second here, and note that I suspect that the average home or small office user would not have found it easy to find this setting, and thus would have been stuck with the default.  And all that that implies.)

The explanation did confirm that setting the number of days to zero does mean the passwords never expire.  But it also told me that “It is a security best practice to have passwords expire every 30 to 90 days, depending on your environment. This way, an attacker has a limited amount of time in which to crack a user’s password and have access to your network resources.”

Microsoft, you’ve got to be kidding.  If an attacker has enough access to your system in order to start cracking your passwords, then they’ll almost certainly succeed within a few days.  Unless you’ve chosen a really, really good password, in which case it might be some years.  So 30 to 90 days makes very little sense.  (And, if you’re really serious about the maximum of 90 days, how come the entry box allows up to 999?)

But then, right down at the bottom, it tells me that “Default: 42.”

Oh, sorry, Microsoft.  Obviously you are kidding.  Nobody could take that seriously as a default.

(But then, why is that the default, and why is it enabled by default? …)

The issue prompted a little more thinking on my part.  Was it really 37 days (42 minus 5) since I’d installed the machine?  Ah, but then, it couldn’t be.  As previously noted, I had to take it back to the store to clear up some OS registration issue.  They, of course, didn’t ask what password I’d set, they just blew off the passwords.  So, the 37 days would start from that point, wouldn’t it?

Well, apparently not.  When I checked my journal, it was obvious that the 37 days started when I first started setting up the computer, not when the store eliminated the passwords.

Interesting version of “history” there, Microsoft …


The “Immutable Laws” revisited

Once upon a time, somebody at Microsoft wrote an article on the “10 Immutable Laws of Security.”  (I can’t recall how long ago: it’s now listed as “Archived content.”  And I like the disclaimer that “No warranty is made as to technical accuracy.”)  Now these “laws” are all true, and they are helpful reminders.  But I’m not sure they deserve the iconic status they have achieved.

In terms of significance to security, you have to remember that security depends on situation.  As it is frequently put, one (security) size does not fit all.  Therefore, these laws (which lean heavily towards malware) may not be the most important for all users (or companies).

In terms of coverage, there is little or nothing about management, risk management, classification, continuity, secure development, architecture, telecom and networking, personnel, incidents, or a whole host of other topics.

As a quick recap, the laws are:

Law #1: If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore

(Avoid malware.)

Law #2: If a bad guy can alter the operating system on your computer, it’s not your computer anymore

(Avoid malware, same as #1.)

Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore

(Quite true, and often ignored.  As I tell my students, I don’t care what technical protections you put on your systems, if I have physical access, I’ve got you.)

Law #4: If you allow a bad guy to upload programs to your website, it’s not your website any more

(Sort of a mix of access control and avoiding malware, same as #1.)

Law #5: Weak passwords trump strong security

(You’d think this relates to access control, like #4, but the more important point is that you need to view security holistically.  Security is like a bridge, not a road.  A road built halfway is still partly useful.  A bridge half-built is a joke.  In security, any shortcoming can void the whole system.)

Law #6: A computer is only as secure as the administrator is trustworthy

(OK, there’s a little bit about people.  But it’s not just administrators.  Security is a people problem: never forget that.)

Law #7: Encrypted data is only as secure as the decryption key

(This is known as “Kerckhoffs’ Law.”  It’s been known for 130 years.  More significantly, it is a special case of the fact that security-by-obscurity [SBO] does not work.)
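Kerckhoffs’ principle can be shown with a toy: in the sketch below the algorithm is completely public, so all of the secrecy rests in the key.  (The XOR “cipher” here is deliberately trivial and far too weak for real use; it only illustrates where the secret lives.)

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream 'cipher': the algorithm is public knowledge;
    only the key is secret.  Encryption and decryption are the same
    operation.  (Illustrative only -- a repeating-key XOR is trivially
    breakable in practice.)"""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_cipher(b"attack at dawn", b"sekrit")
# Anyone who knows the algorithm AND the key recovers the plaintext;
# the algorithm alone, with a wrong key, yields garbage.
```

Which is exactly the point of the law: publishing the algorithm costs you nothing if the key is sound, and hiding the algorithm buys you nothing if it isn’t.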

Law #8: An out of date virus scanner is only marginally better than no virus scanner at all

(I’m not sure that I’d even go along with “marginally.”  As a malware expert, I frequently run without a virus scanner: a lot of scanners [including MSE] impede my work.  But, if I were worried, I’d never rely on an out-of-date scanner, or one that I considered questionable in terms of accuracy [and there are lots of those around].)

Law #9: Absolute anonymity isn’t practical, in real life or on the Web

(True.  But risk management is a little more complex than that.)

Law #10: Technology is not a panacea

(Or, as (ISC)2 says, security transcends technology.  And, as #5 implies, management is the basic foundation of security, not any specific technology.)


Application complexity

Complexity is the enemy of security.

I always emphasize that point in the app sec domain when we have those two adjacent slides showing the old system/application environment, and the new.  I also point out that the “new” is now rather old.  When trying to update that slide I came up with eleven different levels without half trying.  Then, of course, you have to add bi-directional arrows between all adjacent components, and between all components on a given level, and between most components on adjacent levels.  Gets convoluted real fast.

Went to a real-time/component trade show recently, and was talking to some people who did embedded systems.  One of their promotional handouts shows a model that has six layers.  (And, of course, you have to add bi-directional arrows between all adjacent components, etc.)  And that’s just for “simple” embedded devices.
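The arithmetic behind that convolution is easy to sketch.  This toy count (using the layer counts mentioned above, 11 and 6, and assuming full interconnection as a worst case, since real diagrams only connect adjacent components and levels) shows how the possible interactions grow quadratically:

```python
# Toy count of how interconnections grow with component count.
def pairwise_links(n):
    """Bi-directional links needed to connect n components pairwise."""
    return n * (n - 1) // 2

for components in (6, 11):
    print(components, "components ->", pairwise_links(components), "possible links")
```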

We seem to have lost the KISS battle a long time ago.  I guess now we have to try for KIASAPS (Keep It As Simple As Possible, Stupid).


New computers – Windows 7 – security and permissions (2)

Had an interesting experience.

There is a file I keep with some reference material.  For a number of years I’ve had this in the root directory of the drive on most of my machines.  I tried to update it the other day.

I couldn’t.

Windows 7 apparently would not let me modify anything in the top-level directory, even though properties showed that I had full control.  I tried a variety of different ways to make these permissions effective.  No dice.
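For anyone wanting to see what Windows is actually enforcing (as opposed to the summary the Properties dialog shows), the icacls command built into Windows 7 lists the real ACL entries, run from an ordinary command prompt:

```shell
REM List the access control entries on the root of the system drive.
REM icacls is built into Windows 7; its output shows the inherited
REM and per-user entries that the Properties dialog summarizes away.
icacls C:\
```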

Eventually I found myself somewhere that offered to let me blow off permissions for the root directory.  Permanently.

I thought it over, and eventually decided not to.  Generally, I’d agree that having the ability to write to the root directory might possibly be dangerous, in a somewhat bizarre set of circumstances.  But I decided that moving the file wasn’t that much of an issue.  So I let the permissions lie.

But I’m left with some questions.  My first reaction, once I got to the screen that would let me change the permissions, was to blow them away.  I was so frustrated by the roadblocks and lack of information provided by Windows 7 that I probably wasn’t thinking completely clearly.  And I’d suspect I’m not alone in this.

The other question is: why on earth did Windows 7 allow me to put the file there in the first place, but not allow me to modify it?  Isn’t the ability to create a file there an even greater security risk?


Blow your own horn

At a local conference, one presenter had a topic of “Blow Your Own Horn.”  The point was to have some kind of success story (any kind of success story) ready for presentation.  Elevator pitch level stuff, except you aren’t selling anything specific, just success.

For example: “Last year you (the Board) approved the purchase of a $50,000 licence for AV software on the email server.  This past month, records show it stopped 1 million viruses which would otherwise have gotten through.  Had they been run, they would have cost $500 each (estimated industry average) to clean up.  Therefore, your prescient decision to spend $50,000 has returned $500,000,000 to the company.”

(OK, yes, any infosec professional knows the holes in that logic.  And you are spinning it so that you are crediting the Board with what should be *your* success.  But you get the idea.)
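The back-of-the-envelope arithmetic for that pitch packages up neatly (every figure here is the hypothetical one from above, not real data):

```python
# Back-of-the-envelope arithmetic for the hypothetical pitch above.
# Every figure is illustrative, not real data.
licence_fee = 50_000          # AV licence for the email server
viruses_stopped = 1_000_000   # what the monthly records claim
cleanup_cost_each = 500       # "estimated industry average" per infection

avoided_cost = viruses_stopped * cleanup_cost_each
net_return = avoided_cost - licence_fee

print(f"Avoided cleanup: ${avoided_cost:,}")  # $500,000,000
print(f"Net benefit:     ${net_return:,}")    # $499,950,000
```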

I suggest everybody keep a file in some readily accessible drawer, for scribbling down any ideas you come up with along these lines, using company-specific data.  One idea per page.  Any time you get called to the Boardroom (or, depending upon how many ideas you can come up with, any meeting) grab a sheet and read it in the elevator.  Whatever they asked you to talk about, walk in and start off with, “Thank you for your interest in X.  Before I begin, I’d like to let you know that, because of our investment in a $2,000 course in Ethereal, for one of the net sec admins, last April’s intrusion was detected within 5 hours, and we were able to ensure that all servers were hardened against that particular attack within a further 12 hours, all in-house.  Normally such an attack would go undetected for three days, and would have required outside help at a usual cost of $7,000.”

(Yes, this gets down into the weeds in regard to architecture, but security is a lot more about politics than technology.  And people love stories.)


New computers – Windows 7 – XP Mode fixes

I think I may finally be getting the hang of this XP Mode thing.  (I may also be fooling myself …)

As previously noted, XP Mode doesn’t access the “real” drive, but a virtual drive which is contained in one large file.  (Actually, seemingly a minimum of three, but only one appears to contain the drive “contents.”)  XP Mode does provide you with links to the real drives on the computer, and they are accessible from most Windows programs; but, since they are not mapped to drive letters, you cannot do anything with them from DOS programs, even though such programs run under XP Mode.

I figured I would have to create the directories, with files I wanted to work on, within the “virtual” drive, and, each time I made any modifications, remember to copy the new versions back to the “real” disk so they could be used under Win7.  Not only is this a nuisance, but it wastes disk space.  XP Mode takes up enough space as it is: starting at about 1.5 gig, by the time you get it up to speed with Windows updates, it has ballooned to 6 or 7 gig.  Any programs or file space you want come on top of that.  (And, since I no longer trust XP Mode to stay stable, I have been making backup copies as I have been doing the updating and adjusting of the virtual machine, wasting even more disk space.)  An annoyance, to say the least.

I can’t remember where I found it, but somewhere I noted a reference to the actual description, within XP Mode, of the links to the real drives.  It looks just like a network reference to a shared resource.  So I tried using that format to create a DOS “lettered” drive mapping (from within XP Mode).  So far it seems to work fine.

For those who’d like to try, the “network” name of the real computer seems to be TSCLIENT.  So, in order to create a link to the C: drive on the real computer, map to \\TSCLIENT\C.  (It does not seem to matter what your real machine’s name is; that name does not seem to be used in the reference.)
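In command form, the mapping is a standard net use (run this inside the XP Mode virtual machine; X: here is just an arbitrary free drive letter):

```shell
REM Inside XP Mode: map the host machine's C: drive to a DOS-usable
REM drive letter.  TSCLIENT is the generic name given to the host;
REM X: is any free letter.
net use X: \\TSCLIENT\C
```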