Or: One Of The Reasons Why I’ve Never Actually Bought Any Kindle Books from Amazon, And Only Install Free Books:
The US Treasury wants to show how much they care about security. To show how much, here are their password guidelines:
Must be at least 8 characters long.
Must contain at least one uppercase letter.
Must contain at least one lowercase letter.
Must contain at least one numeric character.
Must contain at least one special character.
Must not have more than two repeating characters.
Must not repeat any of your last ten passwords.
Must not have been your password during the last ten days.
Must not be a word in a language, slang, dialect, or jargon.
Must not be related to personal identity, history, environment, or other personal associations.
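The mechanical rules in that list can at least be checked by a program; as a sketch (the function name and rule wording here are mine, and the "no dictionary word" and "no personal association" rules are omitted precisely because they can't be enforced this way), a validator might look like:

```python
import re

def check_password(candidate, previous_passwords):
    """Return a list of the rules the candidate password fails."""
    rules = [
        (len(candidate) >= 8, "at least 8 characters"),
        (re.search(r"[A-Z]", candidate), "an uppercase letter"),
        (re.search(r"[a-z]", candidate), "a lowercase letter"),
        (re.search(r"[0-9]", candidate), "a numeric character"),
        (re.search(r"[^A-Za-z0-9]", candidate), "a special character"),
        # three identical characters in a row = "more than two repeating"
        (not re.search(r"(.)\1\1", candidate), "no more than two repeating characters"),
        (candidate not in previous_passwords[-10:], "not one of the last ten passwords"),
    ]
    return [msg for ok, msg in rules if not ok]
```

Of course, a password that satisfies every one of these checks is exactly the kind of string that ends up on the post-it note.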
(No idea how they can enforce that last rule.) But here’s the kicker. The final rule is:
Must not be shared or displayed in plain view.
Of course not, because you will be able to easily memorize it based on the rules above.
Here’s a hint for someone trying to break into one of their accounts: THE PASSWORD IS ON A POST-IT NOTE IN THE TOP DRAWER.
When will they realize a simple password is so much more secure?
“Learning from the Octopus”, Rafe Sagarin, 2012, 978-0-465-02183-3, U$26.99/C$30.00
%A Rafe Sagarin
%C 387 Park Ave. South, New York, NY 10016-8810
%G 978-0-465-02183-3 0-465-02183-2
%I Basic Books/Perseus Books Group
%O U$26.99/C$30.00 800-810-4145 www.basicbooks.com
%O Audience n+ Tech 1 Writing 2 (see revfaq.htm for explanation)
%P 284 p.
%T “Learning from the Octopus”
The subtitle promises that we will learn “how secrets from nature can help us fight terrorist attacks, natural disasters, and disease.” The book does fulfill that aim. However, what it doesn’t say (up front) is that it isn’t an easy task.
The overall tone of the book is almost angry, as Sagarin takes the entire security community to task for not paying sufficient attention to the lessons of biology. The text and examples in the work, however, do not present the reader with particularly useful insights. The prologue drives home the fact that 350 years of fighting nation-state wars did not prepare either society or the military for the guerrilla-style terrorist situations prevalent today. No particular surprise: it has long been known that the military is always prepared to fight the previous war, not this one.
Chapter one looks to the origins of “natural” security. In this regard, the reader is inescapably reminded of Bruce Schneier’s “Liars and Outliers” (cf. BKLRSOTL.RVW), and Schneier’s review of evolution, sociobiology, and related factors. But whereas Schneier built a structure and framework for examining security systems, Sagarin simply retails examples and stories, with almost no structure at all. (Sagarin does mention a potentially interesting biology/security working group, but then is strangely reticent about it.) In chapter two, “Tide Pool Security,” we are told that the octopus is very fit and functional, and that the US military and government did not listen to biologists in World War II.
Learning is a force of nature, we are told in chapter three, but only in regard to one type of learning (and there is no mention at all of education). The learning force that the author lauds is that of evolution, which does tend to modify behaviours for the population over time, but tends to be rather hard on individuals. Sagarin is also opposed to “super efficiency” (and I can agree that it leaves little margin for error), but mostly tells us to be smart and adaptable, without being too specific about how to achieve that. Chapter four tells us that decentralization is better than centralization, but it is interesting to note that one of the examples given in the text demonstrates that over-decentralization is pretty bad, too. Chapter five again denigrates security people for not understanding biology, but that gets a bit hard to take when so much of the material betrays a lack of understanding of security. For example, passwords do not protect against computer viruses. As the topics flip and change it is hard to see whether there is any central thread. It is not clear what we are supposed to learn about Mutual Assured Destruction or fiddler crabs in chapter six.
Chapter seven is about bluffing, use and misuse of information, and alarm systems. Yes, we already know about false positives and false negatives, but this material does not help to find a balance. The shared values of salmon and suicide bombers, religion, bacterial addicts, and group identity are discussed in chapter eight. Chapter nine says that cooperation can be helpful. We are told, in chapter ten, that “natural is better,” therefore it is ironic to note that the examples seem to pit different natural systems against each other. Also, while Sagarin says that a natural and complex system is flexible and resilient, he fails to mention that it is difficult to verify and tune.
This book is interesting, readable, erudite, and contains many interesting and thought-provoking points. For those in security, it may be good bedtime reading material, but it won’t be helpful on the job. In the conclusion, the author states that his goal was to develop a framework for dealing with security problems, of whatever type. He didn’t. (Schneier did.)
copyright, Robert M. Slade 2012 BKLNFOCT.RVW 20120714
Recently one of the bridges in my area was replaced by a new one. The new Port Mann Bridge is, at the moment, apparently the widest in the world, and will relieve congestion on the existing bridge, which has been a huge bottleneck for years. (Why do I keep flashing on an old saying about “traffic expands to fill anything made available for it …”?)
In order to pay for it, our (currently right-wing) provincial government has formed a “public/private partnership” with a shell corporation (Treo) which gets to “lease” the bridge for about fifty years and put tolls on it.
I’m not sure I’ll have a lot of use for the Port Mann Bridge when it gets tolled (except to get out to the Olive Garden, until they build one closer in). It’s been such a bottleneck for so long that I’ve found all kinds of ways to avoid it. (There is another tolled bridge in the area, and I’ve only traveled over it once, in the first “free” week, just to find out where it was and went.) But I figured I’d get the decal anyway, especially since it gets you a discount, and some extra bucks (equivalent to about 20 free trips) to start off.
You’ll have heard about the debacle in regard to the phone registration, where some of the clerks were in business for themselves, and stole credit card numbers. So I figured I’d register via the Website. The process wasn’t too arduous, although I found it odd that American Express, which I use for most of my pre-authorized charges, wasn’t acceptable. (I also found out that my password algorithm, while it is long, complex, and uses mixed case and non-alphabetic characters, doesn’t generate a number in all cases. Apparently you have to have a number.)
I didn’t realize that I didn’t get a confirmation email until this morning, when I checked the spam filters. There it was.
And, I have to agree. If I was a spam filter, I’d have said it was spam, too. It’s a mess. Looking at the body, I can’t make out anything it is trying to do (other than create all kinds of buttons). The spam report says:
0.00 NO_REAL_NAME From: does not include a real name
0.00 BSF_SC0_MISMATCH_TO Envelope rcpt doesn’t match header
0.00 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
0.00 URI_TRUNCATED BODY: Message contained a URI which was truncated
0.00 HTML_MESSAGE BODY: HTML included in message
Treo itself seems to use a system called Barracuda, and this system also scores the message as spam. (It also seems to have an AV scanner, which appears to be turned off. Apparently Treo is not concerned about sending viruses out to infect other people.)
So, the Treo people don’t seem to be very concerned about information security. Which gets me thinking:
Is the bridge safe?
I have just got off the phone with a marketroid. In the course of our conversation (no, I usually don’t talk to them, but this turned out to be a special case), I was explaining to her about ISC2 and the CISSP. She was puzzled by an annotation on my file with her company, and it wasn’t making sense in terms of what I did, and what their ERM/CRM system was saying about me.
When she looked at the ISC2 Website, during our conversation, she immediately noted the “Security Transcends Technology” slogan. I dimly recall the great fanfare when this was introduced about nine or ten years back: our (marketing department’s) proud statement that we were not mere technologists, but covered the whole realm of security.
Well, apparently that’s not what it says to some people. The simple existence of the “technology” word in our slogan seems to trigger an immediate pegging of us as mere techies. All of us CISSPs are just basic firewall admins. We are not.
Back to the marketing board … ?
1) I keep telling people, the next security risk is the next technology that is there solely for “convenience.”
2) So, your credit cards are going to be in your cell, your bank access is going to be in your cell, your car keys are going to be in your cell, your house keys are going to be in your cell … All your eggs in one basket–that gets dropped in the toilet, left in coats, drops between couch cushions, gets picked up in bars …
3) You can even unlock it remotely, so social engineering is on the table (“Hey, Mr. iPhone User, we’re from the gas company, and your neighbours are reporting a strong smell from your place, any way you could come back here from your conference on the other coast we found out about from your Facebook account and let us in?”)
4) You could use Wifi at close range, but for remote it probably has to have a unit that hooks up to your phone. (I suppose another option is to have the locking device be a cellular device, but that seems excessive.) So, as was mentioned, you have to worry about power outages. Also interference from other Wifi devices, portable phones, cell phones, microwave ovens …
“The 2012 Norton Cybercrime Report, released Wednesday, says more than 46 per cent of Canadians have reported attempts by hackers to try to obtain personal data over the past 12 months,” according to the Vancouver Sun.
Well, since I see phishing every single day, and malware a few times per week, what this survey is *really* saying is that 54% of Canadians don’t know what phishing and malware look like.
(And you others don’t need to gloat: apparently the same figure holds globally …)
Kinda depressing …
What is true of teachers is also true for recruiters.
I am old enough to have gone through group interviews, hostile interviews, video interviews, multi-part phone interviews, questionnaire interviews, weird question interviews, “what do you want to be when you grow up” interviews, and all the other “latest and greatest” ideas that swept through HR-land at one time or another. I understand the intents of the various processes, and what they will and won’t tell you. (When I do recruiting myself, I use the “prepared” interview model–know what it is you want, and how to find out if the candidate has it.)
So, apparently the next big thing in recruiting is to use technology. Use robots. (Well, actually just avatars and virtual game worlds.) Use computerized questionnaires. (They work just as well, and as badly, as paper ones.) Use video. (Wait. We did that already. Oh, I see, use videotape.)
It doesn’t take too long to see what the intent is here. To save time and money.
And, doing it cheaper will work out just as well as doing it cheaper always has.
“There is hardly anything in the world that some man cannot make a little worse and sell a little cheaper, and the people who consider price only are this man’s lawful prey.” – John Ruskin
Someone has made yet another prediction that teachers will shortly be replaced by technology. Teacherless classrooms are, apparently, the way of the future.
I recall this prediction being made, to great fanfare, thirty years ago. I was, at the time, a public school teacher, and at a conference on science education. The first speaker of the day took a bit of time out from his presentation to discuss the issue, and stated that any teacher who *could* be replaced by a computer, *should* be replaced by a computer. His point was that teaching is a profession, not the push button assembly line job that many people seem to mistake it for. Any teacher who is so repetitive, so lacking in imagination, so single dimensional, so robotic that they can be replaced by a machine or a process, should be replaced. A teacher should be able to handle more than “do you want a diploma with that?”
(Go ahead. Make my day. Ask me if this is going to be on the final.)
One way or another I have been teaching for more than forty years. I have taught (in the public school system) every grade level from kindergarten to grade twelve. I have taught in two-year colleges, and at the post graduate level in academia. I have taught for business and in commercial training.
I also have a rather broad experience in “distance education.” I have participated as both director and teacher in video and audio production of teaching materials. I have created online tutorials for computer-based courses. I have designed and programmed interactive computer-based training. Over twenty-five years ago I ran the telecommunications component of the World Logo Conference, which was the first (and possibly still only) event to fully integrate onsite with online participation. (And which also, since Logo is a “teaching” language, involved many teachers and computer educators.)
I have mentioned that I don’t like Webinars. That isn’t because I inherently object to the very idea. I think a good Webinar might be an interesting experience. But, so far, nobody has figured out that good distance education requires more work, not less. (In the same way, publishers of textbooks haven’t yet understood that a good textbook requires better writing, not worse.) We figured this out at the WLC more than two decades ago. The developers of debuggy figured it out about programmed learning more than three decades ago.
There are some, few, isolated examples of individual lessons that have been done well using video, or the Web, or programmed learning, or various other forms of technology. But they are, still, few and isolated, and drowned out in the vast sea of mediocre and wretched attempts. Technology has uses, and good teachers know that. It’s great for drill and practice in some areas. The Web is a great place for discovery and research. Letting a kid loose on the Internet without guidance is a recipe for disaster. We are a long way, a very, VERY long way, from the use of technology to create entirely teacherless classrooms.
Yes, we can certainly use extra training for a number, possibly a very large number, of teachers who are afraid of the technology and don’t use it well. But don’t tell me that you can replace them with droids until you can show me that you understand what teaching is all about.
Now, I’m willing to believe that Shaw is not being deliberately mendacious or misleading. There is probably someplace, or some part of Shaw’s network, that transfers data faster than other vendors in that area or for that component.
And, I have to admit that, since I am not, generally, a high volume user, even the basic service I have from them is usually sufficient. In the afternoon and most evenings.
But, right where I am, Shaw can’t seem to get any data moving in the morning.
I first noticed this a few months ago, and spent quite a bit of time contacting Shaw’s generally unhelpful help staff. This involved them asking me to try a different network cable to the router, or a different computer, or bypassing the router, and checking their speedtest. (None of which made any difference.) They finally sent someone around. The next day. Of course, by that time the problem had resolved. But by that time I’d noticed that traffic was only slow in the morning.
So, over the past few months there have been numerous mornings when it has been slow. I don’t mean just “they promised me speeds up to 5 Mbps and I’m only getting 1.39” slow, I mean “they promise a minimum of 1 Mbps and their own speedtest is showing 0.02 Mbps and that’s only when it actually completes” slow. It doesn’t happen every morning, but often enough to see that the pattern is extremely regular, starting about 8:30 am, and trailing off (as in, network speeds start working again) around 11:30 am.
I’ve reported this to Shaw’s technical support, mostly through Twitter, since it takes less time than fighting your way through their phone voice menu tree, and it doesn’t matter what reporting method you use; they never do anything anyway. (Along the way I have learned that the ShawHelp Twitter people have a “Hello $username. If you follow and DM your account info and phone number we can look into it for you” macro, and that, if you submit details about the speeds and the fact that you have tried various configurations, you will receive a “No issues in your area, modem signal is good. Is computer direct to modem or are you using router?” message about 3 or 4 hours later.)
It’s been annoying, but I’ve lived with it for a while. Except that, for the past week and a half, this has now happened every single day. It is pretty much impossible to do anything in the morning. This morning was particularly bad: I couldn’t even get the speedtest to run, for the most part.
So, if I suddenly stop posting, you’ll kn()^(*%(&*(&*(&^ NO CARRIER
In the February 2012 edition of Computer, a sidebar to an article on “Web Application Vulnerabilities” asks the question: “Why don’t developers use secure coding practices?” The sidebar provides the typical cliches that programmers feel constrained by security practices and suggests that additional education will correct the situation. Another magical solution addressing security concerns is to introduce a secure development process. However, neither improved security education nor a new secure development process is enough on its own; moving from the current development process to a more secure one requires a plan, as the cartoon suggests. Instead of looking for a single solution, another approach is to identify the threat agents, threats, vulnerabilities, and exposures. After identifying these, the next step is to establish a cost-effective security policy that will provide safeguards.
Many view programmers as the primary threat agent in a development environment; however, Microsoft reports that more than 50% of the security defects reported were introduced in the design of a component. Microsoft’s finding suggests that both designers and programmers are threat agents. According to Microsoft’s data, designers and programmers introduce vulnerabilities into an application; it is therefore appropriate to identify all of the software development roles (analysts, designers, programmers, testers) as potential threat agents. Viewing software developers as threat agents should not imply that the individuals filling these roles are careless or criminal, but rather that they have the greatest opportunity to introduce source code compromising the confidentiality, integrity, or availability of a computer system.
Software developers can expose assets accidentally by introducing a defect. Defects have many causes, such as oversight or lack of experience with a programming language, and are a normal part of the development process. Quality Assurance (QA) practices, such as inspections and unit testing, focus on eliminating defects from the delivered software. Developers can also expose assets intentionally by introducing malicious functionality. Malicious functionality can take the form of a variety of attacks, such as worms, Trojans, salami fraud, and other types of attacks. A salami fraud is an attack in which the perpetrators take a small amount of an asset at a time, such as the “collect the round-off” scam. An individual interested in introducing illicit functionality will exploit any available vulnerability. Identifying all of the potential exposures and creating safeguards provides a significant challenge to the security analysts, but by analyzing the development process, it is possible to identify a number of cost-effective safeguards.
Addressing these exposures, many researchers recommend enhancing an organization’s QA program. One frequent recommendation is to expand their inspection practice by introducing a checklist for the various exposures provided by the programming languages used by developers. Items added to a security inspection checklist typically include functions such as Basic’s Peek() and Poke() functions, C’s string copy functions, exception handling routines, and programs executing at a privileged level. Functions like Peek() and Poke() make it easier for a programmer to access memory outside of the program, but a character array or table without bounds checking produces similar results. A limitation of the language-specific inspection checklist is that each language used to develop the application must have its own checklist. For some web applications, this could require three or more inspection checklists, and this may not provide safeguards for all of the vulnerabilities. Static analysis, such as the SAMATE research sponsored by the National Institute of Standards and Technology (NIST), is an approach to automating some of the objectives associated with an inspection checklist, but static analyzers have a reputation for flagging source statements that are not actually problems.
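The language-specific checklist idea can be partially automated as a simple pattern scan over the source. The following is a minimal sketch (the checklist entries are illustrative examples, not a complete list, and the function names are mine):

```python
import re

# Illustrative per-language checklists of constructs an inspection
# would flag; real checklists would be far longer.
CHECKLISTS = {
    "c": [r"\bstrcpy\s*\(", r"\bgets\s*\(", r"\bsprintf\s*\("],
    "basic": [r"\bPeek\s*\(", r"\bPoke\s*\("],
}

def flag_lines(source, language):
    """Return (line_number, line) pairs that match the checklist."""
    patterns = [re.compile(p, re.IGNORECASE) for p in CHECKLISTS[language]]
    return [(n, line) for n, line in enumerate(source.splitlines(), 1)
            if any(p.search(line) for p in patterns)]
```

A scan this naive also illustrates the false-positive problem mentioned above: a comment or a string literal that merely mentions strcpy() would be flagged just as readily as a real call.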
Using a rigorous inspection process as a safeguard will identify many defects, but it will not adequately protect from exposures due to malicious functionality. An inspection occurring before the source code is placed under configuration control provides substantial exposure. In this situation, the developer simply adds the malicious functionality after the source code passes inspection or provides the inspection team a listing not containing the malicious functionality. Figure 1 illustrates a traditional unit-level development process containing this vulnerability.
As illustrated in Figure 1, a developer receives a change authorization to begin the modification or implementation of a software unit. Generally, the “authorization” is verbal, and the only record of the authorization appears on a developer’s progress report or the supervisor’s project plan. To assure that another developer does not update the same source component, the developer “reserves” the necessary source modules. Next, the developer modifies the source code to have the necessary features. When all of the changes are complete, the developer informs the supervisor, who assembles a review panel consisting of 3 to 5 senior developers and/or designers. The panel examines the source code to evaluate the logic and documentation in the source code. A review committee can recommend that the developer make major changes to the source code that will require another review, minor changes that do not require a full review, or no changes and no further review. It is at this point in the development process where the source code is the most vulnerable to the introduction of malicious functionality, because there are no reviews or checks before the software is “checked-in”.
Another limitation of inspections is that the emerging Agile methodologies do not recommend formal inspections. Development methodologies such as eXtreme programming utilize pair-programming and Test Before Design concepts in lieu of inspections, and Scrum focuses on unit testing for defect identification [7, 8]. Using inspections as the primary safeguard from development exposures limits the cost savings promised by these new development methodologies and does not provide complete protection from a developer wishing to introduce malicious software.
Programming languages and the development process offer a number of opportunities to expose assets, but many of the tools, such as debuggers and integrated development environments, can expose an asset to unauthorized access. Many development tools operate at the same protection level as the operating system kernel and function quite nicely as a worm to deposit a root kit or other malicious software. Another potential exposure, not related to programming languages, is “production” data for testing. Using “production” data may permit access to information that the developers do not have a need to know. Only a comprehensive security policy focusing on personnel, operation, and configuration management can provide the safeguards necessary to secure an organization’s assets.
Many organizations conduct background checks, credit checks, and drug tests when hiring new employees as part of their security policy. Security clearances issued by governmental agencies have specific terms; non-governmental organizations should also re-screen development personnel periodically. Some would argue that things like random drug tests and periodic security screenings are intrusive, and they are. However, developers need to understand that just as organizations use locks on doors to protect their physical property, they need to conduct periodic security screenings to protect intellectual property and financial assets from those that have the greatest access.
Another element of a robust development security policy is to have separate development and production systems. Developing software in the production environment exposes organizational assets to a number of threats, such as debugging tools or simply writing a program to gain unauthorized access to information stored on the system. Recent publicity on the STUXNET worm suggests that a robust development security policy will prohibit the use of external media, such as CDs, DVDs, and USB devices. Another important point about the STUXNET worm is that it targeted a development tool, and the tool introduced the malicious functionality.
Configuration management is the traditional technique for controlling the content of deliverable components and is an essential element of a robust security policy. Of the six areas of Configuration Management, the two areas having the greatest effect on security are configuration control and configuration audits. Version control tools, such as Clearcase and CVS, provide many of the features required by configuration control. A configuration audit is an inspection occurring after all work on a configuration item is complete, and it assures that all of the physical elements and process artifacts of the configuration item are in order.
Version control tools prevent two or more programmers from overwriting each other’s changes. Most version control systems permit anyone with authorized access to check source code “in” and “out” without an authorized change request, and some do not even track the last access to a source module. However, in a secure environment, a version control system must integrate with the defect tracking system and record the identification of the developers who accessed a specific source module. Integrating the version control system with the defect tracking system permits only the developer assigned to make a specified change to have access to the related source code. It is also important for the version control system to track the developers that access the source. Frequently, developers copy source code from a tested component or investigate the approach used by another developer to address a specific issue, and need access to read source modules that they are not maintaining. This also provides a good research tool to introduce malicious functionality into another source module. By logging source module access, security personnel can monitor access to the source code.
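The integration described above — check-out gated by an open change request, with every access logged — can be sketched as follows (the data structures, names, and identifiers here are hypothetical, not the API of any real version control or defect tracking tool):

```python
from datetime import datetime, timezone

# Hypothetical link between the defect tracking system and version
# control: (change request, module) -> assigned developer.
change_requests = {("CR-101", "billing.c"): "alice"}

# Every access, including read-only access, is recorded for review.
access_log = []

def checkout(developer, module, request_id):
    """Allow check-out only under an open change request assigned to
    this developer for this module."""
    if change_requests.get((request_id, module)) != developer:
        raise PermissionError("no open change request for this developer/module")
    access_log.append((datetime.now(timezone.utc).isoformat(),
                       developer, module, "checkout"))
    return True

def read_source(developer, module):
    """Reads are permitted, but still logged, so security personnel can
    monitor who is studying which source modules."""
    access_log.append((datetime.now(timezone.utc).isoformat(),
                       developer, module, "read"))
    return True
```

The design point is that the read path is deliberately open but audited: blocking reads would cripple legitimate reuse and research, while logging them gives security staff the trail they need.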
Configuration audits are the second management technique making a development organization more secure. Audits range in formality from a clerk using a checklist verifying that all of the artifacts required for a configuration item are submitted, to a multi-person team assuring that the software delivered produces the submitted artifacts and the tests adequately address risks posed by the configuration item. Some regulatory agencies require audits for safety critical applications/high reliability applications to provide an independent review of the delivered product. An audit in a high security environment addresses the need to assure that delivered software does not expose the organizational assets to risk from either defects or malicious functionality. Artifacts submitted with a configuration item can include, but are not limited to, requirements or change requests implemented, design specification, test script(s), test data, test results, and the source code for the configuration item. To increase confidence that the delivered software does not contain defects or malicious functionality, auditors should assure that the test cases provide 100% coverage of the delivered source code. This is particularly important with interpreted programming languages, such as python or other scripting languages, because a defect can permit the entry of malicious code by a remote user of the software. Another approach auditors can use to assure coverage is to re-test the configuration item with the same test data to assure that the results from the re-test match those produced in the verification and validation procedure.
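The re-test step of such an audit reduces to a reproducibility check: run the checked-in tests against the checked-in source and compare the output with the results artifact the developer submitted. A minimal sketch (the function names are mine, and a real audit would compare structured results, not just a hash of raw output):

```python
import hashlib

def audit_retest(run_tests, submitted_results):
    """Re-run the configuration item's tests and compare the fresh
    output against the results artifact checked in with the item.

    run_tests         -- callable that executes the test script and
                         returns its output as a string
    submitted_results -- the results artifact from version control
    """
    fresh_output = run_tests()
    digest = lambda text: hashlib.sha256(text.encode()).hexdigest()
    # A mismatch means the delivered source, tests, and results are
    # not consistent with each other -- the audit should fail.
    return digest(fresh_output) == digest(submitted_results)
```

A mismatch here does not say *why* the artifacts disagree — stale results, a flaky test, or tampering — only that the configuration item cannot be trusted as checked in.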
Adopting these recommendations for a stronger configuration management process modifies the typical unit-level development process, illustrated in Figure 1, to the more secure process illustrated in Figure 2. In the more secure process, a formal change authorization is generated by a defect tracking system or by the version control system’s secure change authorization function. Next, a specified developer makes the changes required by the change authorization. After implementing and testing the changes, the developer checks all of the artifacts (source code, test drivers, and results) into the version control system. Checking in the artifacts automatically triggers a configuration audit of the development artifacts. Auditors may accept the developer’s changes or create a new work order for additional changes. Unlike the review panel, the auditors may re-test the software to assure adequate coverage and that the test results match those checked in with the source code. Making this change to the development process significantly reduces the exposure to accidental defects or malicious functionality because it verifies the source code deployed in the final product against all of its supporting documentation.
Following all of these recommendations will not guarantee the security of the software development environment because there are always new vulnerabilities from social engineering. However, using recurring security checks, separating developers from production systems and data, controlling media, and using rigorous configuration management practices should make penetration of your information security perimeter more difficult. It is also necessary to conduct a periodic review of development tools and configuration management practices because threat agents will adapt to any safeguard that does not adapt to new technology.
During my years of work as a consultant and trainer in the information security world, I’ve noticed a few patterns that usually exist in those who do very well in the industry vs those who just make it by. I decided to draft this article to share some of the key elements and more importantly, give somewhat of a metric to gauge where you the reader currently sit.
Basically, there seem to be five key levels that I consider to be different milestones or “levels of understanding” in this field. I originally heard a concept like this many years ago as it relates to music. Now I’m going to relate it more directly to penetration testing and exploit writing, but you can apply it to any area of specialization in information security.
Let’s start with Level 1.
Level 1 – Interested Newbie – Unknowing and Unconscious
You don’t know really how to learn these arts, plus you’re unconscious of the fact that you don’t know.
This level is where you’ve probably got your Security+, or you’ve gained the equivalent knowledge base by reading and “tinkering”. You haven’t learned how to exploit anything yet. You know what port scanning is, but you’ve never really done it. You’re familiar with the terms Trojan, malware, rootkit, exploitation, etc., but you haven’t actually had hands-on experience with any of this, at least not knowingly **smile**.
Eventually you start playing with some tools. If you’re a person who’s said, “I downloaded Backtrack but I haven’t figured out how to do anything with it yet,” then you most likely fall into this category. Linux is still a big dark scary cloud for you (if you come from a Windows background), and vice versa if you’re a Linux background person. You might have even taken a CEH class, and you feel like you saw a lot of cool stuff, but you can’t really sit down and reproduce much of it.
Level 2 – Practicing Youngster (a year or two in) – Knowing and Unconscious
You now know a little bit about how to learn these techniques but you’re still unconscious of what you don’t know.
You’re still new to the field. You might have a job that requires you to work with firewalls a little or maybe support of some type. Your inner hacker curiosity has you spending lots of time tinkering with security tools and techniques even though it might not be part of your job. You can run some security tools. You are not “too” afraid of Linux anymore. You’ve been able to get Backtrack to communicate with your network. You’ve learned how to set IP addresses in Linux, and you’re comfortable doing basic things from the shell. You have learned how to use Nmap somewhat. Additionally you’ve also found one or two forums which you like to visit and learn new things from.
The information in these forums ends up being pretty basic, but at the level you’re at right now, you’ve found the more advanced forums, like the official Metasploit one, to be too technical for you. You might visit your first Blackhat/Defcon conference. You leave there realizing for the first time how much you really don’t know. You reach a point of information overload. You enjoy the conference and see lots of eye-popping demonstrations, but you don’t really understand how they work or what the implications really are. You leave Blackhat with the gut-punching realization that you lack the technical ability to demonstrate or recreate anything you’ve witnessed. And this is where you actually start to learn.
Level 3 – Serious Practitioner – Knowing and Conscious
At this point you know how to learn the skills, and you’re conscious of the fact that there are limitless amounts of stuff you don’t know, and additionally you have an idea about the many different aspects and fields within information security. You truly grasp that reverse engineering, exploit writing, and penetration testing are not one big blob of variations of the same thing. You realize that they can all complement each other but they’re not the same. You have gained enough skills to be great one day, but you might not ever truly have the time, or invest the time required, to get to the next level.
It’s been a couple of years or so since you went to Blackhat/Defcon the first time. Now you go back and you understand exponentially more than you did the first time. You’re able to come back and duplicate most of what you’ve seen in the presentations. You also understand what you’ve seen well enough to demo and present it to others. If you’ve never learned to code, you have at this point realized that it’s going to hinder you at some point in your career. You’ve started to learn some scripting languages, and you’re pretty good with them (Perl, Python, etc.). You’re aggressively trying to learn C: not C++, C#, or any of those, but just C. Why? Because a respected security professional told you that you really needed to learn it.
You’re also trying to learn Assembly because you’ve been told that you really needed to know it to write exploits. You view exploit writing and reversing as the next thing you want to accomplish from a learning perspective. But you’ve realized you need to know programming concepts and constructs well to truly reverse and write exploits. You are able to follow exploit writing examples without problem, but your understanding of memory, calls, packing, etc. keeps you from doing it “for real”. If someone gave you a Backtrack CD and a couple of Windows computers and asked you to demonstrate a client side exploit, a server side exploit, and how scanning works, you could do it with no problem. You know TCP and IP like you know your name. You can look at a packet capture and instantly pick out three-way handshakes and other session establishments. You still don’t really know web applications that well, because you still don’t know programming and applications that well. You can demonstrate fluently all of the OWASP top 10, but you still feel there’s a lot missing.
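The three-way handshake the author says you can spot instantly in a capture comes down to reading the TCP control flag bits. As a minimal sketch (the function name is my own; the flag values are the standard TCP bits), the three steps can be distinguished like this:

```python
# TCP control flag bits (RFC 793): SYN = 0x02, ACK = 0x10.
SYN, ACK = 0x02, 0x10

def handshake_step(flags: int) -> str:
    """Label a TCP segment's role in the three-way handshake, if any."""
    if flags & SYN and not flags & ACK:
        return "SYN"       # step 1: client opens the connection
    if flags & SYN and flags & ACK:
        return "SYN-ACK"   # step 2: server responds
    if flags & ACK:
        return "ACK"       # step 3 (or any later acknowledgment)
    return "other"

# The flag bytes of a typical handshake, as seen in a packet capture:
print([handshake_step(f) for f in (0x02, 0x12, 0x10)])
# ['SYN', 'SYN-ACK', 'ACK']
```

Tools like Wireshark do exactly this classification for you, but being able to read the raw flag byte (0x12 is SYN plus ACK) is what the author means by knowing TCP “like you know your name”.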
Congratulations, you’ve reached the point where most security professionals stop or plateau.
Level 4 – Expert – Knowing and Unconscious
You are above most in both skill and knowledge. You know that there are things you don’t know, but you learn them frequently. It’s almost as if it’s a drug to you. You sit with your laptop daily/nightly and plug into forums, YouTube videos, presentations, coding etc. Every night for you seems as if you’ve plugged your brain into the Matrix and had information dumped into it.
While you know there are things you want to learn, you don’t even know or bother to figure out “how” you’re learning them. Your skills are mature enough that you just “do”. You learn without knowing how. When you present or demonstrate things to others, you’re often told that you go way too fast; really, you assume that your audience understands more than it actually does.
There is no looking back now. The only thing that drives you really is learning more. You’re also very much into finding new exploits, and finding new ways to use old exploits. While information security may or may not be your job, it is now your passion.
Level 5 – Leader – Unknowing and Unconscious
You are now at the very top of the field. Whether the rest of the world knows it or not is not relevant. You are not conscious of what you don’t know because you simply don’t care. You have obtained a body of knowledge that puts you in a position where, if you want to learn something, you simply learn it. Nothing about exploitation, or information security generally, seems out of your reach. The only reason you don’t learn something is because you don’t want to. You are now a creator, a driver, and an industry shifter. You are one of the people who put out what others must learn. The industry doesn’t control what you need to know; you control what the industry needs to know. A few names come to mind for me: HD Moore, Dan Kaminsky, and others. For example, Metasploit, the brainchild of HD Moore, literally changed exploitation and exploit development forever. Dan Kaminsky’s DNS research a few years ago caused visible shifts in the attention paid to infrastructure security as it relates to things like DNS.
Most people will never make it to this level. Not because they aren’t smart enough, but because they may not be able to put the time in, or may not have access to the resources needed (some countries filter all Internet traffic). To say HD Moore accomplished what he has simply because he is smart would completely ignore the obvious, huge amounts of time and hard work he’s put in over the years. I think one has to have certain proclivities to reach this level, but I think the time investment is more important than anything else.
This post was written by Mike Sheward, a contributor to InfoSec Resources.
“Managing the Human Factor in Information Security”, David Lacey, 2009, 978-0-470-72199-5, U$50.00/C$55.00/UK#29.99
%A David Lacey
%C 5353 Dundas Street West, 4th Floor, Etobicoke, ON M9B 6H8
%G 978-0-470-72199-5 0-470-72199-5
%I John Wiley & Sons, Inc.
%O U$50.00/C$55.00/UK#29.99 416-236-4433 fax: 416-236-4448
%O Audience n- Tech 1 Writing 2 (see revfaq.htm for explanation)
%P 374 p.
%T “Managing the Human Factor in Information Security”
The preface states that the intent of the book is to identify and explain the range of human, organizational, and social challenges when trying to manage security in the current information and communications environment. It is hoped this material will help manage incidents, risks, and design, and assist with promoting security systems to employees and management. A subsidiary aim is to leverage the use of social networking.
Some aspects of security are mentioned among the indiscriminate stories in chapter one. Chapter two has more tales, with emphasis on risks, and different people you encounter. Generic incident response and business continuity material is in chapter three. When you know the risk management literature, you can see where the arguments in chapter four come from. (Yes, Donn, we know quantitative risk analysis is impossible.) The trouble is, Lacey makes all of them, and therefore comes to no conclusion. Chapter five has some points to make about different types of people, and dealing with them. Unfortunately, it’s hard to extract the useful bits from the larding of stories and verbiage. (Given the haphazard nature of the content, making practical application would be even more difficult.) Aspects of corporate culture are discussed, in an unstructured fashion, in chapter six. Chapter seven notes a number of factors that have appeared in successful security awareness programs, but doesn’t fulfill the promise of helping the reader design them. Chapter eight is about changing organizational attitudes, so it’s an (equally random) extension of chapter six. It also adds some more items on training programs. Chapter nine is about building business cases. Generic advice on creating systems is provided in chapter ten. Some even broader advice on management is in chapter eleven. A collection of some points from throughout the book forms a “conclusion.”
There are good points in the book. There are points that would be good in one situation, and bad in another. There is little structure in the work to help you find useful material. There are stories about people, but not a survey of human factors. Lacey uses lots of aphorisms throughout the text. I am reminded of the proverb that if you can tell good advice from bad advice, you don’t need any advice.
copyright, Robert M. Slade 2012 BKMHFIIS.RVW 20120216
SMS spam on Bell seems to have suddenly jumped. On Tuesday, both Gloria and I got spam saying we had won something from Apple. Today, we both got similar spam.
Today’s message came “from” 240-393-8527. It asked us to visit hxxp://www.apple.com.ca.llhf.net 
Neither F-Secure nor VirusTotal had anything to say about it, but it is safe to assume that the site is dangerous. Avast now blocks it.
In trying to contact Bell about this, I noted that Bell’s Website “contact” page lists a “Chat with us” function that simply does nothing if agents are busy, and offers no means of contacting Bell via email. “How to escalate a complaint” returns the same page, with the same lack of response from the agent button. When I finally did reach an agent, “he” was pretty clueless about the whole situation. I strongly suspected “he” was a rather simplistic program.
When I gave the agent the information above, his response was to ask “Samuel: I understand. Have you registered under apple newsletter list?” He then asked for my name and phone number (which I had given him at the beginning of the session), and then told me “Samuel: I unfortunately cannot unsubscribe that spam for you from here as I see in your account.” He offered to cut the SMS/texting function on my account.
That’s it. That’s the only solution. Bell doesn’t have any spam filtering on SMS, even when the spam is as obvious, egregious, and malicious as this one. (Yes, they do have a spam filtering option, if you want to pay them an extra $5 per month. Given the quality of support, I think I’ll give that a miss.)
Note that this isn’t apple.com: the rightmost labels determine the actual domain, which is llhf.net; everything to the left of it is decoration. This domain is registered to:
Domain Name ………………… llhf.net
Name Server ………………… ns5.myhostadmin.net
Registrant Name …………….. jun wang
Registrant Organization ……… wang jun
Registrant Address ………….. shang hai shi xu hui qu
Registrant City …………….. shang hai
Registrant Province/State ……. SH
Registrant Postal Code ………. 200087
Registrant Country Code ……… cn
Registrant Phone Number ……… 02178861511
Registrant Fax ……………… 02178861511
Registrant Email ……………. email@example.com
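The reason a hostname like this fools people is that DNS is read right to left: only the rightmost labels identify the registrant, while everything to the left can be set to anything the domain owner likes. A minimal sketch (deliberately simplified: real software consults the Public Suffix List, since suffixes like co.uk span more than one label):

```python
def registrable_domain(hostname: str) -> str:
    """Naive heuristic: treat the last two labels as the registered
    domain. (Real implementations use the Public Suffix List.)"""
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:])

# Everything left of the registrant's domain is decoration:
print(registrable_domain("www.apple.com.ca.llhf.net"))  # llhf.net
print(registrable_domain("www.apple.com"))              # apple.com
```

So the spam link, despite starting with “www.apple.com.ca”, leads to whatever server llhf.net’s owner points it at.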
Kalamazoo cop, on vacation, with his wife, visits Nose Hill Park in Calgary. He feels threatened that two complete strangers feel free to try and strike up a conversation.
Writes a letter to the Calgary Herald saying how threatened he feels since he wasn’t allowed to bring his gun.
It was later confirmed that these threatening strangers were handing out free passes to the Stampede.
More details can be found in at least 13 news stories by searching the Web.