First off, I probably have to modify the perception that I may have left, in this series of postings, that I hate the Mac and everything it stands for. Not true. While I find the “Apple knows best” attitude frustrating at times (all right, many times), the MacBook Pro that I purchased is a nice machine in many ways. For one thing, it’s the most powerful machine I’ve got at the moment. (Until I get the time to install the new desktop, anyway.) For another, it hibernates (or suspends, or sleeps, or whatever you want to call it) really well. I appreciate that ability to simply close the lid, and open it up, and all my stuff is still ready to go, within seconds. (This has been a particular frustration with the Asus netbook, which sometimes hibernates, and sometimes decides to think about it. Forever. Or, until I take the battery out, whichever comes first.) I like the ongoing and very accurate battery indicator (although I’ll have more to say about that in another post).
It was the battery indicator that first alerted me to the issues with Flash. As one of my Mac resource helpers noted when I found this out, Flash may, single-handedly, be responsible for global warming. It is rather odd to pull up a YouTube video, or any other page with a high Flash content (news sites are particularly vile in this regard) and watch the battery life almost instantly cut in half (or drop even further). To get your battery life (well, most of it, anyway) back again, all you have to do is drop the offending Flash page.
The thing is, I’ve never noticed this before on my other laptops. Certainly Flash, on Windows, doesn’t have anything like that same effect on the battery life. Yes, it’s more of a drain, and, yes, you’ll probably have to keep an eye on heating issues. But the battery life isn’t half of what it was simply because of viewing videos.
Apple doesn’t like Flash. The converse may also be true. Because, despite the Mac’s much-vaunted prowess in multimedia areas, online video definitely seems to be a problem for it.
At home, we’ve recently been watching some TV programs via the Internet. (We’ve done this because, at home, I get Internet service from Shaw, which provides our cable TV as well. And they seem to be just as unreliable at providing an uninterrupted TV feed as they are at providing Internet service or help. So we’ve had to fall back on the Internet to catch up on shows we’ve missed while the cable was out.) Because of this, I’ve had a chance to do some comparison between a seven-year-old Windows (XP) desktop machine and a brand new MacBook Pro. The old Windows machine wins, hands down. We’ve watched streaming feeds of shows from the company Websites of CBC, GlobalTV, and Bravo, all at the standard presented resolution, and in full-screen display. All of these sites use Flash. And the old (seven years old, remember) Windows machine, using Firefox, has won every round against the Mac, using Safari. The streaming is just as good (which is odd, considering the sheer age of the Windows box), but the Mac tends to lock up (or go random places) any time we use the controls to rewind, or pick up a missed segment.
To repeat what I started out with, the Mac is great in many areas. Viewing Twitter, even with the new (and heavily script-laden) interface, the Mac is very much faster, and Safari opens new windows and loads them quickly. Which is why I found the online video weakness to be so odd …
While at CanSecWest, I was noting a news story about how somebody had, yet again, defrauded the US government and military by selling them a terribly sophisticated computer algorithm that promised to find secret information about enemies and/or terrorists, but actually didn’t work. I suspect that this will be a complex case, since the vendor will undoubtedly claim that his work is so sophisticated and complicated that it does work, it’s just that the users didn’t understand it.
In view of this, I found it really interesting to note a very similar case, just a few days later. Computerized Voice Stress Analyzers (CVSAs) have been promoted and sold for at least 25 years now. This despite the fact that, four years ago, the U.S. Department of Justice did a study and concluded that “VSA programs show poor validity – neither program efficiently determined who was being deceptive about recent drug use. The programs were not able to detect deception at a rate any better than chance … The data also suggest poor reliability for both VSA products when we compared expert and novice interpretations of the output.”
In a sense the CVSA case is much worse, because, since it is a private company selling to private companies, there is nobody to say that these people are a) wasting money, and b) making poor hiring decisions based on what is essentially a coin flip.
I’m working through a book to learn about my new Mac. (You’ll see the review eventually, and probably recognize some of this text when you do.) It provides the information necessary to begin to operate the computer, but it also gives the lie to the statement that the Mac is easy to use. There are a huge number of options for different functions, so many that it is impossible to remember them all. The material is generally organized by topic, but there are notes, tips, and mentions buried in the text, and it is almost impossible to find these again, when you go back to look for them. (The “delete” key definitely needs to be listed in either the index or the key shortcuts appendix.)
One of the appendices is a Windows-to-Mac dictionary, which can be quite handy for those who are used to Microsoft systems. It could use work in many areas: the entry for “Copy, Cut, Paste” says they work “exactly” as they do in Windows, but does not give the key equivalent of “Command” (the “clover” symbol)-C rather than Ctrl-C. (It was also only in working through some practice that I discovered that what the book describes as the “option” key is portrayed, in Mac menus, with a kind of bashed “T.” Yes, I suppose that, once you know this, it does look kind of like a railroad switchpoint, but it’s hardly intuitively obvious.)
There is a style issue in the written material of the book: the constant assertions that the Mac is better than everything, for anything. The first sentence of chapter one says “When you first turn on a Mac running OS X 10.6, an Apple logo greets you, soon followed by an animated, rotating ‘Please wait’ gear cursor–and then you’re in. No progress bar, no red tape.” Well, if the gear cursor isn’t an analogue of a progress bar, I don’t know what it’s supposed to be. Also, this statement is false: when you first turn on a Snow Leopard Mac, you have to go through some red tape and questions. This is only one example of many. This style may have some validity. After all, anyone who does not use a Mac comes across the same attitude in any Mac fanatic, and, even without the system chauvinism, a positive approach to teaching about the computer system is likely helpful to the novice user. However, the style should not get in the way of factual information.
I’m used to UNIX, and I’m already into Terminal, but it’s annoying to have that be the only way to access some of the material, given the repeated assertion that the Mac is so easy to use. Another little quirk today: yes, you can access Windows servers, but you can’t save anything to them. (I did find a way around that: create the file in Windows, open it on the Mac, copy information into it, and then save. Easy, right?)
High-bandwidth Digital Content Protection (HDCP) is a form of copyright protection developed by Intel. It is designed to prevent the copying of digital audio and video as it travels across media interfaces such as HDMI, DisplayPort or Unified Display Interface (UDI).
The system is meant to stop HDCP-encrypted content from being played on devices that do not support HDCP or which have been modified to copy HDCP content. Before sending data, a transmitting device checks that the receiver is authorized to receive it. If so, the transmitter encrypts the data to prevent eavesdropping as it flows to the receiver.
Manufacturers who want to make a device that supports HDCP must obtain a license from Intel subsidiary Digital Content Protection, pay an annual fee, and submit to various conditions.
On 14th September 2010 the HDCP Master Key was somehow leaked, and published online in various places. At present it is unknown how this Master Key was obtained, or whether Intel is investigating how it happened.
The leaked master key is used to create all the lower level keys that are stored within devices, so you can see what a nightmare this must be for Intel.
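To see why that is, here is a toy sketch, simplified from public descriptions of the HDCP 1.x scheme (matrix size and key width are as documented; everything else here is illustrative), of how a symmetric master matrix lets any two licensed devices derive the same shared secret. Whoever holds the matrix can mint valid keys for any device identifier (KSV), which is exactly the nightmare:

```python
import random

MOD = 2 ** 56          # HDCP device keys are 56-bit values
N = 40                 # 40x40 master matrix; KSVs are 40 bits wide

def make_symmetric_master(rng):
    # A symmetric N x N matrix of 56-bit values plays the role of
    # the leaked master key.
    m = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i, N):
            m[i][j] = m[j][i] = rng.getrandbits(56)
    return m

def make_ksv(rng):
    # A KSV is a 40-bit identifier with exactly 20 bits set.
    ksv = 0
    for b in rng.sample(range(N), 20):
        ksv |= 1 << b
    return ksv

def device_keys(master, ksv):
    # A device's 40 private keys are column sums of the master-matrix
    # rows selected by its own KSV bits -- this is what the licensing
    # body computes and ships inside each licensed device.
    rows = [i for i in range(N) if ksv >> i & 1]
    return [sum(master[i][j] for i in rows) % MOD for j in range(N)]

def shared_key(my_keys, peer_ksv):
    # During authentication, each side adds up its own private keys
    # at the positions of the *peer's* KSV bits.
    return sum(my_keys[j] for j in range(N) if peer_ksv >> j & 1) % MOD

rng = random.Random(1)
master = make_symmetric_master(rng)
ksv_tx, ksv_rx = make_ksv(rng), make_ksv(rng)
tx_keys = device_keys(master, ksv_tx)
rx_keys = device_keys(master, ksv_rx)
# Symmetry of the master matrix guarantees both sides agree:
assert shared_key(tx_keys, ksv_rx) == shared_key(rx_keys, ksv_tx)
```

Because the agreement works for *any* KSV fed into `device_keys`, the leaked matrix cannot be revoked the way a single compromised device key can.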
Intel has threatened to sue, under intellectual property laws, anyone who makes use of this key. However, it is now only a matter of time before we start seeing black market devices appearing.
If anyone’s at all interested though, you can find the key here.
Posted on July 24th, 2010 by p1
A recent Scientific American article does point out that it is getting increasingly difficult to keep our Trusted Computing Base sufficiently small.
For further information on this scenario, see: http://www.imdb.com/title/tt0436339/ 
We actually discussed this in the early days of virus research, and sporadically since. The random aspect (see Dell’s problems with bad chips; the stories about malware on the boards are overblown, since the malware was simply stored in unused memory, rather than being in the BIOS or other boot ROM) is definitely a problem, but a deliberate attack is problematic. The issue lies with hundreds of thousands of hobbyists (as well as some of the hackers) who poke and prod at everything. True, the chance of discovering the attack is random, but so is the chance of keeping the attack undetected. It isn’t something that an attacker could rely upon.
Yes, these days there are thousands of components, being manufactured by hundreds of vendors. However, note various factors that need to be considered.
First of all, somebody has to make it. Most major chips, like CPUs, are a combined effort. Nobody would be able to design and manufacture a major chip all by themselves. And, in these days of tight margins and using every available scrap of chip “real estate,” someone would be bound to notice a section of the chip labeled “this space intentionally left blank.” The more people who are involved, the more likely someone is going to spill the beans, at the very least about an anomaly on the chip, whether or not they knew what it did. (Once the word is out that there is an anomaly, the lifespan of that secret is probably about three weeks.)
Secondly, there is the issue of the payload. What can you make it do? Remember, we are talking components, here. This means that, in order to make it do anything, you are generally going to have to rely on whatever else is in the device or system in which your chip has been embedded. You cannot assume that you will have access to communications, memory, disk space, or pretty much anything else, unless you are on the CPU. Even if you are on the CPU, you are going to be limited. Do you know what you are? Are you a computer? Smartphone? iPod? (If the last, you are out of luck, unless you want to try and drive the user slowly insane by refusing to play anything except Barry Manilow.) If you are a computer, do you know what operating system you are running? Do you know the format of any disk connected to you? The more you have to know how to deal with, the more programming has to be built into you, and remember that real estate limitation. Even if all you are going to do is shut down, you have to have access to communications, and you have to a) be able to watch all the traffic, and b) do so without degrading performance. (OK, true, it could just be a timer. That doesn’t allow the attacker a lot of control.)
Next, you have to get people to use your chips. That means that your chips have to be as cheap as, or cheaper than, the competition. And remember, you have to use up chip real estate in order to have your payload on the chip. That means that, for every 1% of chip space you use up for your programming, you lose 1% of manufacturing capacity. So you have to have deep pockets to fund this. Your chip also has to be at least as capable as the competition. It also has to be as reliable as the competition. You have to test that the payload you’ve put in place does not adversely affect performance, until you tell it to. And you have to test it in a variety of situations and applications. All the while making sure nobody finds out your little secret.
Next, you have to trigger your attack. The trigger can’t be something that could just happen randomly. And remember, traffic on the Internet, particularly with people streaming videos out there, can be pretty random. Also remember that there are hundreds of thousands of kids out there with nothing better to do than try to use their computers, smartphones, music players, radio controlled cars, and blenders in exactly the way they aren’t supposed to. And several thousand who, as soon as something odd happens, start trying to figure out why.
Bad hardware definitely is a threat. But the largest part of that threat is simply the fact that cheap manufacturers are taking shortcuts and building unreliable components. If I were an attacker, I would definitely be able to find easier ways to mess up the infrastructure than by trying to create attack chips.
Get it some night when you can borrow it, for free, from your local library DVD collection. On an evening when you don’t want to think too much. Or at all. WARNING: contains jokes that six-year-olds, and most guys, find funny.
It’s easy to spoof caller-ID with some VoIP systems. There are a few Websites that specifically allow it. It’s a little harder, but geekier, to spoof or overflow caller-ID with a simple Bell 212A modem: it’s transmitted with that tech between the first and second rings of the phone. (Since most people have caller-ID these days, many telcos don’t play you the first ring. Since we don’t have caller-ID, we often get accused of answering the phone before it rings.) (Of course, the rings you hear on the calling side aren’t necessarily the rings heard on the other end, but …)
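For the curious, the payload the modem sends between those rings is a simple framed message. Here is a sketch of the single data message format (SDMF) encoding as publicly documented (Bellcore-style), leaving out the analogue preamble and the 1200-baud modulation; the sample number is made up:

```python
def sdmf_caller_id(month, day, hour, minute, number):
    # SDMF: message type 0x04, a length byte, then the date/time as
    # "MMDDHHMM" and the calling number, all as ASCII digits.
    data = f"{month:02d}{day:02d}{hour:02d}{minute:02d}{number}".encode("ascii")
    msg = bytes([0x04, len(data)]) + data
    # Checksum is the two's complement of the byte sum, so the whole
    # frame sums to zero modulo 256.
    checksum = (-sum(msg)) & 0xFF
    return msg + bytes([checksum])

# Hypothetical call on Dec 25 at 15:30 from 604-555-1234:
frame = sdmf_caller_id(12, 25, 15, 30, "6045551234")
assert sum(frame) % 256 == 0  # this is all the receiver verifies
```

Note that nothing in the frame is authenticated: any device that can put 1200-baud FSK on the line between the rings can claim to be any number, which is why spoofing is so easy.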
Apparently AT&T allows immediate access to voicemail on the basis of caller-ID.
Apparently, with Android phones, it’s also gotten even easier to spoof caller-ID.
Like freedom of the press, ultimately, net neutrality is going to be reserved for those who own one.
Well, we’re getting closer.
Consider the case of cell/mobile phones. The device is basically useless for communications, unless you have service through a provider. They have the cell towers, and link through to the public telephone network. You have to pay them, and they get to manage how calls are made.
(Just as a side issue: the more people who are subscribers in a given location, the worse chance you have of getting to make a call. You get to be a victim of the telco’s success.)
Now, consider. Cell phones are getting smarter all the time. Already they can connect with (and via) wifi. And we can build applications for them. Recently I didn’t have Internet access for my laptop, and someone with an Android phone was able to set up a wireless access point for me, through his phone, and give me that connection.
OK, let’s extend the routing a bit. We’ve developed a lot of good routing protocols from building the Internet. Let’s extend those to handle hops from phone to phone. And, using voice over IP, and a few other technologies, pretty soon we can make calls without hitting a cell tower.
(And, bearing in mind technologies such as CDMA2000, note that, in opposition to the cell tower contention model, the *more* cell phones we put into an area under this model, the *better* our coverage and bandwidth is going to be. The closer the devices are to each other, the faster [more bandwidth] they can talk to each other.)
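The density argument can be illustrated with two textbook formulas: Friis free-space path loss and the Shannon capacity limit. The numbers below (1.9 GHz carrier, 5 MHz channel, 20 dBm transmit power, -100 dBm noise floor) are illustrative assumptions, not measurements of any real network:

```python
import math

def received_power_dbm(tx_dbm, freq_hz, dist_m):
    # Friis free-space path loss: every doubling of distance costs
    # about 6 dB of received signal.
    c = 3e8  # speed of light, m/s
    fspl_db = 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)
    return tx_dbm - fspl_db

def shannon_capacity_bps(bandwidth_hz, snr_db):
    # Shannon limit: capacity grows with the log of the SNR.
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

NOISE_DBM = -100
# Denser mesh means shorter hops; shorter hops mean higher SNR
# and therefore more achievable bandwidth per hop.
for dist in (800, 400, 200, 100):   # hypothetical hop lengths, metres
    snr_db = received_power_dbm(20, 1.9e9, dist) - NOISE_DBM
    mbps = shannon_capacity_bps(5e6, snr_db) / 1e6
    print(f"{dist:4d} m hop: {mbps:.1f} Mbit/s")
```

Run it and the per-hop capacity climbs steadily as the hop length shrinks, which is the “more phones, better coverage” point in miniature.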
Latency may be a problem. Security (in terms of confidentiality) will definitely need to be addressed. Long distance transmission will be a concern (although it’s rather amazing how many ideas start popping up as soon as those issues are raised).
Basically, any form of communication can follow the same model. With a bunch of cell phones where your only cost is the initial cost of the phone itself (no subscription, no usage cost, no long distance charges), it won’t take long for landline phones to be phased out. Data communications, for any “store and forward” model (basically, anything other than streaming), will be even more efficient. Maybe there will still be a place for the telcos: if you want faster service for real-time streaming of content. Of course, they’d have to be willing to set the price point for unlimited data low enough to be attractive …
However, we would now seem to be nearer the end than the beginning. It’s mostly a matter of which platform to start with. Google has demonstrated that it *can* turn off applications, but, in this case, why should it? Apple might want to jump in on the ground floor, but they turn off a lot more apps, and, in any case, it probably isn’t best to start with a phone where you have to hold your tongue (or your hand) just right to get it to work.
There is no possible way this could potentially go wrong, right?
Doesn’t the phrase “Identity Ecosystem” make you feel all warm and “green”?
It’s a public/private partnership, right? So there is no possibility of some large corporation taking over the process and imposing *their* management ideas on it? Like, say, trying to re-introduce the TCPI?
And there couldn’t possibly be any problem that an identity management system is being run out of the US, which has no privacy legislation?
The fact that any PKI has to be complete, and locked down, couldn’t affect the outcome, could it?
There isn’t any possible need for anyone (who wasn’t a vile criminal) to be anonymous, is there?
By the way, in non-Sonne-erous G8/20 news, our government(s) have spent a billion dollars on security for a couple of days of meetings. Even given the degraded value of the American billion, that’s a lot of money.
Part of it was used to buy sound cannons. (The police don’t like you saying that: they prefer the term “long range sonic control devices.”) These sound cannons generate noise at 130 decibels, which the civil liberties folks are concerned will damage human hearing.
That’s the same level of noise a vuvuzela makes.
So, look, why didn’t we save the billion dollars, go down to Canadian Tire, and, for a hundred bucks (possibly in Canadian Tire money) equip the entire riot squad with vuvuzelas?
I know that this topic has been discussed before, but I am writing this one as a reminder to all the CISOs out there who allow people to connect their phones to their corporate PCs.
I do agree that, in their default configuration, iPhones aren’t exactly the most dangerous of devices to have on your network. However, if you take the step of jailbreaking your iPhone, it opens up a whole new playing field.
After jailbreaking my phone, the first things that I installed were nmap, metasploit, tcpdump, and an application to enable my phone as a USB drive. This allowed me to gain access to a corporate network via wireless on my phone, and exploit a Windows host in about 10 minutes, all while sitting in the lobby.
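The point is not the specific tools: the core of what a port scanner does fits in a few lines on any platform with a sockets API. As an illustration only (this is not the setup described above), a minimal TCP connect scan in Python, demonstrated against a listener we open ourselves:

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    # A TCP "connect" scan: a port counts as open if the three-way
    # handshake completes. Tools like nmap do this (and much more,
    # much faster).
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demonstration against a listener we control, on an ephemeral port:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
found = connect_scan("127.0.0.1", [port])
print(found)  # the listening port shows up as open
listener.close()
```

Anything that can run this loop over a wireless interface is, for inventory purposes, a scanner, whatever shape the device is.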
Also, with a bit of scripting, or paid-for applications, I was able to plug my iPhone into a PC and copy everything that was stored in the My Documents folder for that user. Some of this was company confidential data; some of it was personal photos and banking details.
Don’t get me wrong, I love my iPhone, but I believe that corporations should treat smartphones as a serious security risk, and not just write them off as phones. The age of a cell phone being just a cell phone is long gone, and phones are easy to get into places; no-one bats an eyelid if you spend 10 minutes typing on your phone.
Next time you see someone sitting in a lobby working on their phone, remember this article, and ask yourself, what defenses do you have in place to protect against this threat?
After months of intermittent attempts and research, I finally have a connection between two of my laptops, and an Internet connection to the one that is not physically connected to the wired LAN.
(Well, perhaps I might qualify that. I appear to have a connection to the Internet, and I seem to have been successful at viewing a couple of Websites, and sending one piece of email. It’s pig slow, and at the moment the mailer is trying to download some email. It’s made enough of a connection to know that some email is there, but actually retrieving the email is taking enough time that I have been able to start to prep this posting in a browser window while I’m waiting. I type very slowly, and, as of the end of this paragraph, it hasn’t yet successfully downloaded the second of seven messages.)
(The speed of the connection [although the computer says the connection is "Very Good"] may be due to the fact that I’m using WEP with a 104, rather than 40, bit key. Don’t know how much difference it would make. At the moment, having only just established the connection, I’m not about to mess with the settings to find out.)
However, as happy as I am to have the connection, the simple fact of it is not important enough to warrant a blog post. No, the real point is all the trouble I encountered trying to find out how to make it work, following on from the complexity of any computing that I wrote about earlier.
As usual, I made my own life more difficult. If all I wanted was a simple ad-hoc wireless network, that could be had for the asking. Well, sort of. A simple wireless network doesn’t do very much, unless you can share information from the drives, or share an Internet connection. And that seems to be extra.
(Maybe. At one point in the process, I had left one of the test wireless networks “on.” And in one of my classes, one of my students managed to connect to it and get an Internet connection from the wired connection I had. Random successes aren’t terribly useful, unless you can repeat them.)
Anyway, I have a wired network at home. I have sharing enabled, so that I can copy materials from one machine to another. At the moment, all of them run Windows XP. (Yeah, I know. I’ll get around to Linux sometime …) I have (now) multiple laptops, and have to take at least one of them on the road for teaching. And, of course, the mobile machines have to connect to all kinds of wired and wireless connections on the road.
Of course, the easy way would be to go to London Drugs and get a wireless router, connect it to the wired LAN, and fill in a few simple settings. It’d probably take no more than a couple of hours, from beginning to end. But I wouldn’t learn much about ad-hoc networking that way, and I’ve been getting more interested in it, particularly as a security concern, as I have been seeing that “computer-to-computer network” legend show up in more and more places. (Especially with “Free Internet Connection!” as the network name.)
So, having a spare laptop (since, on a recent teaching trip, it decided to go spare on me), I figured it would be easy to set up a connection between that and the new one.
Actually, it was on the trip that I wanted to start the process. There was nothing wrong with the old laptop (except that it was a Toshiba, and I’ve had two Toshibas in a row, and I will never again buy anything made by Toshiba, since they’ve given me nothing but grief for eight years) except that the power supply was becoming unreliable. I bought a cheap (and non-Toshiba) netbook and asked for advice about connecting them via an ad-hoc network in order to transfer the necessary files.
Well, lots of advice, but nothing actually worked, and I fell back on using the Passport external drive my wonderful daughters gave me that has been so useful in so many situations. But it doesn’t do networking.
Friends gave me some starting points in terms of places to look for advice. Microsoft, naturally. There is a wonderful page at http://www.microsoft.com/windowsxp/using/networking/expert/bowman_02april08.mspx which provides clear explanations. Only a couple of problems: it was written in 2002, so the dialogue boxes have changed. This piece does talk about sharing an Internet connection, but it doesn’t mention the need to modify the default IP addresses, since everything seems to want to use 192.168.0.1 as a base, and that leads to conflicts. Bottom line: it doesn’t work.
Microsoft updated the information in 2006 at http://www.microsoft.com/windowsxp/using/networking/setup/adhoc.mspx and the dialogue boxes are closer to what you’ll actually see these days. After running through that one I tested it out, only to find that the network never does show up in “Available Wireless Networks.” I’m not sure if this is because, if you choose WEP, and tell it not to broadcast the key, it keeps it hidden. I did manage to connect to the network, and even seemed to be able to see other computers’ drives, and see something of the Internet, but all of the connections disappeared over time. Again, this page says to use Internet Connection Sharing, but doesn’t provide the necessary detail to make it work.
All kinds of pages are out there, if you do a Web search, seemingly based on this same, limited, misinformation. At http://www.home-network-help.com/ad-hoc-wireless-network.html the author seems to have given some thought to the issue of IP addresses, but not much. http://www.home-network-help.com/ics-host-computer.html goes into a bit more detail on the IP addresses, but not enough, particularly in terms of the entries that have to be made in various places on various machines.
Finding all the places to make those entries is a trip in and of itself. The Help and Support Center for XP Home Edition is no help. At one point I was afraid that the multitude of entries for the various networks I’ve connected to in hotels, airports, and seminar hosting sites had something to do with it, so I went and deleted all of those “Preferred networks” I had accumulated over the years. (Did you know that they were all still there?)
Lots of people are willing, and more than willing, to provide the benefit of their lack of experience. I say this, since so many of the entries don’t actually work. http://www.ehow.com/how_6108229_make-wirelss-internet-_ad_hoc-wireless_.html Terse; doesn’t work. http://www.ehow.com/how_5167281_create-ad-hoc-wifi-network.html Slight tech detail; doesn’t cover sharing a drive or Internet connection; doesn’t explain how to make the new wireless network visible to “View available wireless networks.” http://www.ehow.com/how_5154137_create-ad-hoc-network.html A touch more detail than the above (5167281); mentions the need to share the Internet connection; mentions a dialogue button that doesn’t exist in the XP explanation. http://www.ehow.com/how_5946176_set-hoc-network-windows-xp.html Some detail on setting up the network; doesn’t completely work; nothing on sharing. http://www.ehow.com/way_5492555_ad-hoc-network-tutorial.html Some detail on setting up the network; doesn’t completely work; nothing on sharing. http://www.ehow.com/how_5670567_set-ad-hoc-wireless-network.html Some detail on setting up the network; doesn’t completely work; nothing on sharing; does cover both XP and Vista.
Some of the advice is contradictory. For example, I mentioned I was using WEP. This is because some of the sites, such as http://www.hardwaresecrets.com/article/418 and http://www.tomshardware.com/forum/28615-42-networking-security-problem, suggest that WPA and WPA2 can’t be used if the “host” for your ad-hoc network is running Windows XP (which mine is). Of course, that might be old news, which might have been superseded by intervening upgrades. But, with this level of information, how am I supposed to tell?
We are awash in a sea of information. Except that some of the information is misinformation. As John Lawton stated, the irony of the Information Age is that it has given new respectability to uninformed opinion. This can have rather significant consequences. A recent CBC story notes that this may have played into the May 6 stock market mini-meltdown.
So far, the best clue I received was from http://www.wi-fiplanet.com/tutorials/article.php/3822651. I had frequently seen the “Bridge connections” option, but I somehow never thought to have two networks “selected” when I tried it. Even then, I might have missed the opportunity. I got the usual error message, but it suddenly dawned on me that ICS might conflict with it. (Given that everybody else had been telling me to turn ICS on.) So, I turned ICS off, and, sure enough, Bridge connections was happy to do just that.
I still have no clue what has been set, and where …
Over the years I’ve had to learn a lot about computers. I’ve written device drivers for the All-in-One system under VAX/VMS. I know what to do with MS-DOS’s AUTOEXEC.BAT and CONFIG.SYS files. I’ve learned more word processors than I can remember the names of. I was using UNIX when that was still a big deal. Because of some research that was important in the early days of computer viruses, I know a question that will stump any computer forensics expert on the witness stand.
I’m a little afraid of my new netbook. Within a few months I’ll need to buy a new desktop, and I know I’m going to be more afraid of it.
In the DOS days, I knew pretty much everything that was going on in it. I knew the hardware, and the system files. I even had a bunch of tools that would let me see the raw disk and memory. It was tedious to do so, but it was possible.
Even when Windows 3 and 95 came out, I understood that this was simply a new interface. I could still examine the system, and make sure everything was as it should be. I could have confidence and assurance in the computer. True, there wasn’t any serious protection on it, but, since I knew the full system, I could examine it regularly and make sure that nothing untoward was happening.
Then came Windows NT. Extra protection on the system, but suddenly, every time you turned the system on, 400 files (a number of them system files) got modified. Change detection lost its value as a security measure.
Then the later members of that family started adding ties into applications and back again. And with Windows XP, for the first time, when a friend’s computer got infected, the only solution was to re-install the system.
Complexity is the enemy of security. However, this goes deeper. These days we have huge numbers of people using devices that are, as far as they are concerned, magic. Don’t get me wrong. I think magic is a lot of fun. It’s just that magic seems to be defined as inherently unknowable, and these users are not only content with, but actually proud of, their ignorance.
This is dangerous. When you assume that you cannot know, that seems to absolve you of any responsibility for even trying. You punch the icons, and do things with no understanding of the consequences.
At the moment, I am trying to set up an ad-hoc wireless network between some of my machines. I’m not having much luck. I’ve researched the process, and had suggestions from friends. I’ve been working at it, off and on, for months. It still isn’t working. I can’t find the information I need, either on the process, or in regard to the actual settings on my machines.
Ignorance isn’t bliss. It’s dangerous. If I, as a computer, communications, and security specialist of decades’ standing, can’t get a simple (well, not quite that simple) network set up, how can we give advice to the novice users of the world on how to keep themselves safe?
With twenty years’ experience reviewing antivirus software, I figured I’d better try out Microsoft Security Essentials (MSE).
It’s not altogether terrible. The fact that it’s free, and from Microsoft (and therefore promoted), might reduce the total level of infections, and that would be a good thing.
But even for free software, and from Microsoft, it’s pretty weird.
When I installed it, I did a “quick” scan.
That ran for over an hour on a machine with a drive that’s got about 70 GB of material on it, mostly not programs. At that point I hadn’t found out that you can exclude directories (more on that later), so it found my zoo. It deleted nine copies of Sircam.
Lemme tell ya ’bout my zoo. It’s got over 1500 files in it. There are a lot of duplicate files (hence the nine copies of Sircam), and there are files in there that are not malware. There are files which have had the executable file extensions changed. But there are a great number of common, executable, dangerous pieces of malware in there, and the only thing MSE found was nine copies of Sircam.
(Which it deleted. Without asking. Personally, for me, that’s annoying. It means I have to repopulate my zoo from backups. But for most users, that’s probably a good thing.)
Now, when I went to repopulate my zoo, I, of course, opened the zoo directory with Windows Explorer. And all kinds of bells and whistles went off. As soon as I “looked” at the directory, the real-time component of MSE found more than the quick scan did. That probably means the real-time scanner is fairly decent. (In my situation it’s annoying, so I turned it off. MSE is now annoyed at me, and continues to be annoyed, with big red flags on my task bar.)
MSE has four alert levels to categorize what it finds, and you have some options for setting the default actions:
severe: “Recommended action,” “Remove,” or “Quarantine”
high: “Recommended action,” “Remove,” or “Quarantine”
medium: “Recommended action,” “Remove,” “Quarantine,” or “Allow”
low: “Recommended action,” “Remove,” “Quarantine,” or “Allow”
Initially, everything is set at “Recommended action.” I turned everything down to the lowest possible settings: I want information, not strip mining. However, for most people it would seem reasonable to keep the default action, which seems to be removal for everything.
I don’t know where it puts the quarantined stuff. It does have a directory at C:\Documents and Settings\All Users\Application Data\Microsoft Security Essentials, but no quarantined material appears to be there.
(I did try to find out more. It does have help functions. If you click on the “Help” button, it sends you to this site. However, if you click on the link to explain the actions and alert levels, it sends you to this site. If you examine those two URLs, they are different. If you click on them, you go to the same place. At that location, you can get some pages that offer you marketing bumpf, or watch a few videos. There isn’t much help.)
You can exclude specific files and locations. Personally, I find that extremely useful, and the only reason that I’d continue using MSE. It does seem to work: I excluded my zoo before I did a full scan, and none of my zoo disappeared when I did the full scan. However, for most users, the simple existence of that option could signal a loophole. If I was a blackhat, first thing I’d do is find out how to exclude myself from the scanner. (There is also an option to exclude certain file types.)
So I did a full scan. That took over eight hours. I don’t know exactly how long, because I finally had to give up and leave it running; MSE doesn’t report how long a scan took, only what it found. (I suspect the total run was around ten or eleven hours. MSE claims that a full scan can take up to an hour.)
While MSE is running, it really bogs down the machine. According to Task Manager it doesn’t take up much in the way of machine cycles, but the computer sure isn’t responsive while it’s on.
When I came back and found it had finished, the first thing it wanted me to do was send a bunch of suspect files to Microsoft. The files were all from my email. On the plus side, the files were all messages that reported suspect malware or Websites, so it’s possible that we could say MSE is doing a good job in scanning files and examining archives. (On the other hand, every single message was from Sunbelt Software. This could be coincidence, but it is also a fact that Sunbelt makes competing AV software, and was formerly associated with a company that Microsoft bought in its race to produce AV and anti-spyware components.)
Then I started to go through what Microsoft said it found, in order to determine what I had lost.
The first item on the list was rated severe. Apparently I had failed to notice six copies of the EICAR test file on my machine.
Excuse me? The EICAR test file? A severe threat? Microsoft, you have got to be kidding. And the joke is not funny.
The EICAR test file is a test file. If anyone doesn’t know what it is, read about it at EICAR, or at Wikipedia if you don’t trust EICAR. It’s harmless. Yes, a compatible scanner will report it, but only to show that your scanner is, in fact, working.
It shouldn’t delete or quarantine all copies it finds on the machine.
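For anyone who wants to verify that their own scanner is alive, the EICAR string is published precisely so you can create the test file yourself. This sketch simply writes it out (the filename is arbitrary; any compliant scanner should flag the file, and nothing more):

```python
# The 68-byte EICAR test string, published by EICAR for exactly this
# purpose: a harmless file that compliant scanners report as malware.
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}"
    r"$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

def write_test_file(path="eicar.com"):
    """Write the EICAR test file so you can confirm a scanner is working."""
    with open(path, "w", newline="") as f:
        f.write(EICAR)
    return path
```

If your scanner reports this file, it is working. Reporting it is correct behaviour; silently deleting every copy on the machine, and rating it “severe,” is not.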
After some considerable work, I did find them. They seemed to be the “suspect” messages that Microsoft wanted. And when I tried to recover them, I found that MSE had not quarantined them: they were left in place. So, at the very least, at times MSE lies to you.
(I guess I’d better add my email directory to places for MSE not to scan.)
MSE quarantined some old DOS utilities. It quarantined a bunch of old virus simulators (the ones that show you screen displays, not actual infectors). (Called them weird names, too.)
MSE quarantined Gibson Research‘s DCOMbob.exe. This is a tool for making sure that DCOM is disabled on your machine. Since DCOM was the vector for the Blaster worm (among others), and is really hard to turn off under XP, I find this rather dangerous.
OK, final word is that I can use it. I’ll want to protect certain areas before I do, but that shouldn’t be too much of a concern for most users.
You might want to make sure Microsoft isn’t reading your email …
Vanish, a system for making electronic data self-destruct, has had some discussion in the forensics world over the past few days. Here’s an extract from Science Daily:
“Computers have made it virtually impossible to leave the past behind. College Facebook posts or pictures can resurface during a job interview. A lost cell phone can expose personal photos or text messages. A legal investigation can subpoena the entire contents of a home or work computer. The University of Washington has developed a way to make such information expire. After a set time period, electronic communications such as e-mail, Facebook posts and chat messages would automatically self-destruct, becoming irretrievable from all Web sites, inboxes, outboxes, backup sites and home computers. Not even the sender could retrieve them.
“The team of UW computer scientists developed a prototype system called Vanish that can place a time limit on text uploaded to any Web service through a Web browser.
[Perhaps a bit narrower focus than the original promise, but it is a prototype - rms]
“After a set time text written using Vanish will, in essence, self-destruct. The Vanish prototype washes away data using the natural turnover, called “churn,” on large file-sharing systems known as peer-to-peer networks. For each message that it sends, Vanish creates a secret key, which it never reveals to the user, and then encrypts the message with that key. It then divides the key into dozens of pieces and sprinkles those pieces on random computers that belong to worldwide file-sharing networks. The file-sharing system constantly changes as computers join or leave the network, meaning that over time parts of the key become permanently inaccessible. Once enough key parts are lost, the original message can no longer be deciphered.”
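Vanish proper uses Shamir threshold secret sharing scattered across a distributed hash table; as a simplified illustration of the key-splitting idea in that description, here is an n-of-n XOR split, where the loss of even one piece makes the key unrecoverable:

```python
import secrets

def split_key(key, n):
    """Split key into n pieces; ALL pieces are needed to rebuild it.
    (Vanish itself uses Shamir threshold sharing, so only a quorum of
    pieces is needed -- this n-of-n XOR split just shows the idea.)"""
    pieces = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for p in pieces:
        last = bytes(a ^ b for a, b in zip(last, p))
    pieces.append(last)
    return pieces

def recover_key(pieces):
    """XOR all pieces back together; any missing piece leaves only noise."""
    key = bytes(len(pieces[0]))
    for p in pieces:
        key = bytes(a ^ b for a, b in zip(key, p))
    return key
```

Sprinkle the pieces onto nodes that churn in and out of a peer-to-peer network, and the natural decay of the network erases the key for you, which is the clever part of the design.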
However, given the promise to clean up social networking sites, an immediate problem occurred to me as I started to read the paper. And, lo and behold, the authors admit it:
“We therefore focus our threat model and subsequent analyses on attackers who wish to compromise data privacy. Two key properties of our threat model are:
1. Trusted data owners. Users with legitimate access to the same VDOs trust each other.
2. Retroactive attacks on privacy. Attackers do not know which VDOs they wish to access until after the VDOs expire.
The former aspect of the threat model is straightforward, and in fact is a shared assumption with traditional encryption schemes: it would be impossible for our system to protect against a user who chooses to leak or permanently preserve the cleartext contents of a VDO-encapsulated file through out-of-band means. For example, if Ann sends Carla a VDO-encapsulated email, Ann must trust Carla not to print and store a hard-copy of the email in cleartext.”
So, this system works perfectly. If you only communicate with people you trust (both in terms of intent, and competence), and who only use the system properly, and never use any of the information in any program that is not part of the system, it’s completely secure.
How often have we heard that said?
The default-to-privacy aspect is interesting, and the automatic transparency for the user as well, but this simply moves the problem one step back, as it were. In terms of utility to social networking, the social networks would have to be completely rewritten to adhere to the system, and even then it would be pretty much impossible to ensure that nobody could scrape data and keep or publish it elsewhere.
(Plus, the data is still there, and so is Moore’s Law …)
Dinosaur that I am, it never occurred to me that long URLs were a major problem. Sure, I’d gotten lots that were broken, particularly after going through Web-based mailing lists. But you could generally put them back together again with a few mouse clicks. So what?
So the fact that there were actually sites that would allow you to proactively pre-empt the problem, by shortening the URL, came as a surprise. What was even more of a surprise was that there were lots of them. Go ahead. Do a search on “+shorten +url” and see what you get. Thousands.
http://bit.ly/
http://tubeurl.com/
http://www.shortenurl.com/index.php
http://urlzoom.org/
http://ayuurl.com/
http://urlsnip.com/
http://url.co.uk/
http://metamark.net/
http://8ez.com/
http://notlong.com/
http://shorten.ws/
http://myurl.si/
http://dwindle.me/
http://nuurl.us/
http://myurlpro.com/
http://2url.org/
http://tiny.cc/
I would not, by the way, advise visiting that last. .cc is a domain used by those on the dark side. In fact, I wouldn’t recommend visiting many of those: I have no idea where they came from, except that a search pops them up. Which is part of the point.
Are URL shorteners a good thing? Joshua Schachter says no. Therefore, in opposition, Ben Parr says yes. There are legitimate points to be made on both sides. They add complexity to the process. (Shorteners aren’t shorteners: they are redirectors.) They make it easier to tweet (and marginally easier to email). They disguise spam. Some of the sites give you link use data. They create another failure point. They hide the fact that most Twitter users are, in fact, posting exactly the same link as 49,000 other Twitter users.
URL shorteners/redirectors are going to be used: that is a given. Now that they’re here, they are not going away. Those of pure heart and altruistic (or, at least, merely monetary) motive will provide the services, have reasonable respect for privacy, and add functions such as providing link-use data to the originator (and, possibly, the user). A number of the sites will be set up to install malware on the originator’s machine, to preferentially try to break the Websites identified, to mine and cross-correlate URL and usage data, and to redirect users to malicious sites.
If you are going to use them (and you are, I can tell), then choose wisely, grasshopper. There are lots to choose from. Choose sites that offer preview capabilities. If someone doesn’t use the preview options, you can still add them. http://tinyurl.com/a-short-url-that-expands is the same as http://preview.tinyurl.com/a-short-url-that-expands : you just have to add the “preview.” part. http://is.gd/ is even easier: just add a hyphen to the end of the shortened URL. I’m hoping that one of the sites will start checking the database for already existing links, and returning the same “short form”: it’d make it easier to identify all the identical tweets. (With the increasing use of the sites, it will also ensure that the hash space doesn’t expand too quickly, which would be to the advantage of the shortening sites.)
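The two preview tricks just described can be captured in a small helper. This is only a sketch, based on the behaviour of those two services as described above (other shorteners will differ, and the helper is my own invention):

```python
def preview_url(short_url):
    """Rewrite a shortened URL into its preview form, using the two
    tricks described above: TinyURL takes a "preview." prefix, and
    is.gd takes a trailing hyphen. Other services are not handled."""
    if "//tinyurl.com/" in short_url:
        return short_url.replace("//tinyurl.com/", "//preview.tinyurl.com/", 1)
    if "//is.gd/" in short_url:
        return short_url + "-"
    raise ValueError("no known preview trick for this service")
```

For example, `preview_url("http://tinyurl.com/a-short-url-that-expands")` gives you the preview.tinyurl.com version, so you can see where you are going before you go there.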