The decline of credit cards

At the BC ISMS User Group meeting last week we were concentrating on the relationship between the ISO 27000 family of standards and PCI-DSS (the Payment Card Industry Data Security Standard, usually just known as PCI).  PCI-DSS is of growing concern for pretty much anyone who does online retail commerce (and, come to that, anyone who does any kind of commerce that involves any use of a credit card).

It kind of crystallized some ideas that I’ve been mulling over recently.

Over the past year or so, I’ve been examining some situations for small charitable organizations, as well as some small businesses.  Many would like to sell subscriptions and raffle tickets, accept donations, or sell small specialty items over the net.  However, I’ve had to consistently advise them that they do not want to get involved with PCI: it’s way too much work for a small company.  At the same time, most small Web hosting providers don’t want to get involved with it, either.

The unintended consequence of PCI is that small entities simply cannot afford to be involved with credit cards anymore.  (It’s kind of too bad that, a decade ago, MasterCard and Visa got within about a month of releasing SET [Secure Electronic Transactions] and then quit.  It probably would have been perfect for this situation.)

Somewhat ironically, PCI means a big boost in business for PayPal.  It’s fairly easy to get a PayPal account, and then PayPal can accept credit cards (and handle the PCI compliance), and then the small retailer can get paid through a PayPal account.  So far PayPal has not created anything like PCI for its users (which is, again, rather ironic given the much wilder environment in which it operates, and the enormous effort phishing spammers make in trying to access PayPal accounts).  (The PayPal Website is long on assurances in terms of how PayPal secures information, and very short on details.)

This is not to say that credit cards are dead.  After all, most PayPal purchases will actually be made with credit cards: it’s just that PayPal will handle the actual credit card transaction.  Even radical new technologies for mobile payments tend to be nothing more than credit card chips embedded in something else.

These musings, though, did give a bit more urgency to an article on F-commerce: the fact that a lot of commercial and retail activity is starting to happen on Facebook.  Online retail transactions aren’t new.  They aren’t even new in terms of social networks or a type of currency created within an online system.  Online game systems have been dealing with the issue for some time, and blackhats have been stealing such credits and even using them to launder money for a number of years now.  However, the sheer size of Facebook (third largest “national population” in the world), and the fact that that entire population is (by selection) quite affluent means that the new Facebook credit currency may very quickly balloon to an enormous size in relation to other currencies.  (We will leave aside, for the moment, the fact that I personally consider Facebook to be tremendously divisive to the Internet as a whole.  And that Facebook does not have the best record in terms of security and privacy.)  Creation of wealth, ex nihilo, on a very, very large scale.  What are the implications of that?


Shaw and Spamhaus

I seem to be back on the air.

A few observations over this whole affair:

(Sorry, I’ve not had time to put these in any particular order, and some of the points may duplicate or relate to each other …)

1) I still have absolutely no idea why Shaw cut me off.  They keep blaming Spamhaus, but the only links they offer me as evidence clearly show that there is no “bad reputation” attached to the specific IP address that I am currently using, only a policy listing covering one of Shaw’s address ranges.

2) I got absolutely no warning from Shaw, and no notice after the fact.

3) Shaw’s spam filtering is for the birds.  Today I got two messages flagged as spam, for no clear reason I could see.  They were from a publisher, asking how to send me a book for review.  The only possible reason I could see was that the publisher copied three of my email addresses on the same message.  A lot of people do that, but it usually doesn’t trip the spam filter.  Today it did.  (Someone else with Shaw “service” tried to send out an announcement to a group.  Since he didn’t have a mailing list server, he just sent out a bunch of messages.  Apparently that got *his* account flagged as spamming.)  I also got the usual round of messages from security mailing lists tagged as spam: Shaw sure has something against security.  And at least one 419 scam got through unflagged today, despite being like just about every other 419 in the world.  (Oddly, during this period I’ve noted a slight uptick in 419s and phishing in general.)

4) Through this episode I had contact with Shaw via email, phone, “live chat,” and Twitter.  I follow ShawInfo and Shawhelp on Twitter.  On Twitter, I was told to send them a direct message (DM).  I had, in fact, tried to do that, but Shaw doesn’t accept direct messages by default.  (Since I pointed that out to them, they now, apparently, accept them from me.)  They sent me public messages on Twitter, and I replied in kind.  Through the Twitter account they also informed me that error 554 means “poor reputation” and is caused by sending too many emails.  They didn’t say how many is too many.  (Testing by someone else indicated something on the order of 50-100 per hour, and I’ve never done anything near that scale.)

5) The “live chat” function installs some software on your (the client’s) machine.  At least two of the pieces of software failed digital signature verification …

6) The “information” I got from Shaw was limited.  The first (phone) support call directed me to http://www.senderbase.org/senderbase_queries/detailip?search_string=70.79.166.169  If you read the page, the information is almost entirely about the “network,” with only a few (and not informative) pieces about the IP address itself.  (I did, separately, confirm that this was my IP address.)  The bulk of the page is a report on addresses that aren’t even in the same range as I am.  About halfway down the right hand side of the page is “DNS-based blocklists.”  If you click the “[Show/Hide all]” link you’ll notice that four out of five think I’m OK.  If you click on the remaining one, you go to http://www.spamhaus.org/query/bl?ip=70.79.166.169  At the moment, it shows that I’m completely OK.  At the time I was dealing with Shaw, it showed that the address was not in the Spamhaus Block List (SBL) or the XBL.  It was in the PBL (Policy Block List), but only as part of a policy listing of one of Shaw’s ranges, not because of the address’s own behaviour.  In other words, there is nothing wrong with my IP address: Shaw is in the poop for allowing (other) people to send spam.  (A minimal sketch of how these DNS-based blocklist lookups work appears after the last of these notes.)

7) The second (live chat) support call sent me to http://www.mxtoolbox.com/SuperTool.aspx?action=blacklist%3a70.79.166.169+  Again, this page showed a single negative entry, and a whole page of positive reports.  The single negative entry, if pursued, went to the same Spamhaus report as detailed above.

8) At the time, both initial pages, if followed through in terms of details, led to http://www.spamhaus.org/pbl/query/PBL164253 giving, as the reason, that “This IP range has been identified by Spamhaus as not meeting our policy for IPs permitted to deliver unauthenticated ‘direct-to-mx’ email to PBL users.”  Again, Shaw’s problem, not mine.  That page does have a link to allow you to try to have an address removed, but it says that the “Removal Procedure” is only to be used “If you are not using normal email software but instead are running a mail server and you are the owner of a Static IP address in the range 70.79.164.0/22 and you have a legitimate reason for operating a mail server on this IP, you can automatically remove (suppress) your static IP address from the PBL database.”  Nevertheless, I did explore the link on that page, which led to http://www.spamhaus.org/pbl/removal/  Again, there you are told “You should only remove an IP address from the PBL if (A) the IP address is Static and has proper Reverse DNS assigned to your mail server, and (B) if you have a specific technical reason for needing to run a ‘direct-to-MX’ email service, such as a mail server appliance, off the Static IP address. In all other cases you should NOT remove an IP address from the PBL.”  This did not refer to my situation.  Unfortunately, THESE TWO PAGES ARE INCORRECT.  If you do proceed beyond that page, you get to http://www.spamhaus.org/pbl/removal/form  This page does allow you to submit a removal request for a dynamic IP address, and, in fact, defaults to dynamic in the form.  It was only in the last part of the second call, when the Shaw tech gave me this specific address, that I found this out.  For this I really have to blame Spamhaus.

9) In trying to determine if, by some weird mischance, my computer had become infected, I used two AV scanners, one spyware scanner, and two rootkit scanners.  (All results negative, although the Sophos rootkit scanner could have been a bit clearer about what it had “found.”)  Of course, I’ve been in the field for over two decades.  How would the average user (or even a security professional in a non-malware field) even know that there are different types of scanners?  (Let alone the non-signature based tools.)
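As a footnote to points 6 and 8: the “DNS-based blocklists” involved are queried by ordinary DNS lookups, so you can check an address yourself without relying on a support tech’s links.  The sketch below (Python, standard library only) reverses the octets of an IPv4 address and looks it up under the Spamhaus zone used by those pages; a listed address resolves to a 127.0.0.x return code (the PBL uses 127.0.0.10 and 127.0.0.11, if memory serves), while an unlisted address simply fails to resolve.  Note that the answer depends on which resolver handles the query, so treat this as illustrative only.

```python
import socket

def dnsbl_lookup(ip, zone="zen.spamhaus.org"):
    """Check an IPv4 address against a DNS-based blocklist.

    Convention: reverse the octets and query them as a hostname under the
    list's zone.  A listed address resolves to a 127.0.0.x "return code";
    an unlisted address gets NXDOMAIN (a resolution failure here).
    """
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        return socket.gethostbyname(query)   # e.g. "127.0.0.10" for a PBL-only listing
    except socket.gaierror:
        return None                          # not listed

if __name__ == "__main__":
    code = dnsbl_lookup("70.79.166.169")
    print("not listed" if code is None else "listed, return code " + code)
```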


“Extrusion Detection”, Richard Bejtlich

BKEXTDET.RVW   20101023

“Extrusion Detection”, Richard Bejtlich, 2006, 0-321-34996-2,
U$49.99/C$69.99
%A   Richard Bejtlich www.taosecurity.com taosecurity.blogspot.com
%C   P.O. Box 520, 26 Prince Andrew Place, Don Mills, Ontario  M3C 2T8
%D   2006
%G   0-321-34996-2
%I   Addison-Wesley Publishing Co.
%O   U$49.99/C$69.99 416-447-5101 800-822-6339 bkexpress@aw.com
%O  http://www.amazon.com/exec/obidos/ASIN/0321349962/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0321349962/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0321349962/robsladesin03-20
%O   Audience a+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P   385 p.
%T   “Extrusion Detection: Security Monitoring for Internal Intrusions”

According to the preface, this book explains the use of extrusion detection (related to egress scanning), to detect intruders who are using client-side attacks to enter or work within your network.   The audience is intended to be architects, engineers, analysts, operators and managers with an intermediate to advanced knowledge of network security.  Background for readers should include knowledge of scripting, network attack tools and controls, basic system administration, TCP/IP, as well as management and policy.  (It should also be understood that those who will get the most out of the text should know not only the concepts of TCP/IP, but advanced level details of packet and log structures.)  Bejtlich notes that he is not explicitly addressing malware or phishing, and provides references for those areas.  (It appears that the work is not directed at information which might detect insider attacks.)

Part one is about detecting and controlling intrusions.  Chapter one reviews network security monitoring, with a basic introduction to security (brief but clear), and then gives an overview of monitoring and a listing of some tools.  Defensible network architecture, in chapter two, provides lucid explanations of the basics, but the later sections delve deeply into packets, scripts and configurations.  Managers will understand the fundamental points being made, but pages of the material will be impenetrable unless you have serious hands-on experience with traffic analysis.  Extrusion detection itself is illustrated with intelligible concepts and examples (and a useful survey of the literature) in chapter three.  Chapter four examines both hardware and software instruments for viewing enterprise network traffic.  Useful but limited instances of layer three network access controls are reviewed in chapter five.
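Not an example from the book, but as a rough illustration of the “egress awareness” idea behind extrusion detection: the sketch below (using the third-party psutil package, an assumption on my part) lists established outbound connections on a host and flags anything going to a port outside a purely hypothetical allow-list.  Bejtlich’s approach works from network traffic captured at chokepoints rather than from a single host’s socket table, so treat this only as a conceptual toy.

```python
# Toy "egress awareness" check: flag established outbound connections to
# ports outside a (hypothetical) allow-list.  Requires the psutil package.
import psutil

ALLOWED_PORTS = {25, 53, 80, 443}   # hypothetical policy: mail, DNS, web only

def unexpected_egress():
    suspects = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue                 # skip listeners and non-established sockets
        if conn.raddr.port not in ALLOWED_PORTS:
            suspects.append((conn.pid, conn.laddr.ip, conn.raddr.ip, conn.raddr.port))
    return suspects

if __name__ == "__main__":
    for pid, lip, rip, rport in unexpected_egress():
        print(f"pid {pid}: {lip} -> {rip}:{rport}")
```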

Part two addresses network security operations.  Chapter six delves into traffic threat assessment, and, oddly, at this point explains logs, packets, and sessions clearly and in more detail than the earlier chapters.  A decent outline of the advance planning and basic concepts necessary for network incident response is detailed in chapter seven (although the material is generic and has limited relation to the rest of the content of the book).  Network forensics gets an excellent overview in chapter eight: not just technical points, but stressing the importance of documentation and transparent procedures.

Part three turns to internal intrusions.  Chapter nine is a case study of a traffic threat assessment.  It is, somewhat of necessity, dependent upon detailed examination of logs, but the material demands an advanced background in packet analysis.  The (somewhat outdated) use of IRC channels in botnet command and control is reviewed in chapter ten.

Bejtlich’s prose is clear, informative, and even has touches of humour.  The content is well-organized.  (There is a tendency to use idiosyncratic acronyms, sometimes before they’ve been expanded or defined.)  This work is demanding, particularly for those still at the intermediate level, but does examine an area of security which does not get sufficient attention.

copyright, Robert M. Slade   2010     BKEXTDET.RVW   20101023


REVIEW: “Codes, Ciphers and Secret Writing”, Martin Gardner

BKCOCISW.RVW   20101229

“Codes, Ciphers and Secret Writing”, Martin Gardner, 1972,
0-486-24761-9, U$4.95/C$7.50
%A   Martin Gardner
%C   31 East 2nd St., Mineola, NY  11501
%D   1972
%G   0-486-24761-9
%I   Dover Publications
%O   U$4.95/C$7.50 www.DoverPublications.com
%O  http://www.amazon.com/exec/obidos/ASIN/0486247619/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0486247619/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0486247619/robsladesin03-20
%O   Audience n- Tech 1 Writing 2 (see revfaq.htm for explanation)
%P   96 p.
%T   “Codes, Ciphers and Secret Writing”

This brief pamphlet outlines some of the simple permutation and substitution ciphers that have been used over time.  The emphasis is on the clever little tricks that go into making ciphers slightly harder to crack.  None of the algorithms are terribly sophisticated, and exercises are given at the end of each chapter.  Instructions are given for decrypting some of the ciphers, even if you don’t know the key.
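As a taste of the level involved (this is my own toy, not an example from the booklet): a shift-substitution (Caesar) cipher, together with the brute-force “try every key” approach that the decryption exercises rely on, fits in a few lines of Python.

```python
import string

ALPHABET = string.ascii_uppercase

def caesar(text, shift):
    """Shift each letter by `shift` places; a negative shift decrypts."""
    out = []
    for ch in text.upper():
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26] if ch in ALPHABET else ch)
    return "".join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)   # "DWWDFN DW GDZQ"
for key in range(26):                      # key unknown?  Just try all 26 shifts.
    print(key, caesar(ciphertext, -key))
```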

Two additional chapters address related topics.  The first deals with various forms of secret writing, such as invisible inks, or steganographic messages.  The last chapter briefly examines the problem of creating messages that unknown people, with unknown languages, may be able to solve (such as sending messages to the stars).

None of the material is strenuous, but this may be a nice start before moving on to a work such as Gaines’ “Cryptanalysis” (cf. BKCRPTAN.RVW).

copyright, Robert M. Slade   2010     BKCOCISW.RVW   20101229


New computers – disappointments

Crazy busy, this time of year, isn’t it?  That’s why I haven’t gotten back to this until now.

(Mind you, last Sunday, at church, the kids put on this play, the point of which was that the “busy” parts of Christmas often aren’t the most important aspects.  The tagline to the play was that you should be busy about the right things, not the wrong ones.  And we’ve mostly been busy with the twins.)

For example, #2 Daughter was heavily involved in the local Atom hockey tournament, since #3 Grandson was on one of the teams.  So, we pulled extra babysitting time while she was running things and he *didn’t* have to be there.

The famous Coquitlam Atom Boom Boom Puck Jam Hockey tournament was won, Tuesday, by the Coquitlam Chiefs C1 team in an exciting finish.  Tied after regulation time [with full 15 minute periods, rather than the normal Atom 12, with the third period shortened if the game runs over time], and still tied after five minutes of 4 on 4 sudden death overtime, the final was decided on the second-to-last shot of a shoot-out.

Because of conflicting appointments, Grandpa (of #6, right wing forward) had to travel an hour and a half, taking the Skytrain and bus out to the rink, but is still chuffed  :-)

But the disappointments of which I speak had to do with computers.  Part of the pain of buying new stuff is that things you thought you could rely on, well, it turns out you can’t rely on them anymore.

One of them is NOD32.  Eset does make a good product, although it tends to be fairly greedy for cycles while operating, and a bit arrogant in terms of what it tells you.  So, when a family member was in trouble over an infection (always embarrassing when your own family doesn’t take precautions, isn’t it?) I had no hesitation in applying NOD32 to try and clean it up.  Well, the machine is older, and slow.  And it hadn’t been updated in a while, so I was trying to fix that, too.  NOD32, even after finishing its scan, was interfering with the update process.  So much so that it got to the point where we thought the machine was unrecoverable.

We did get it back in operation.  (And, first thing, removed NOD32.)  But it’s disappointing when a trusted tool bites you.

(Speaking of which, I’ve spoken before about MSE, and even mentioned some of the performance degradation it can cause in older machines.  I must say that, in some recent experiences, I’m more and more impressed with it as a means of rescuing computers that have been infected.)

More closely related to the new computers: one of my favourite places to get computer equipment, over the years, has been a western Canadian drugstore chain called London Drugs, similar (for those of you in the States) to Walgreens, although sometimes London Drugs is closer in scale to Target.  For twenty years I have been sending people to them for good advice, knowledgeable staff, and decent prices.

Well, one of the computers I bought, this time around, is a Mac.  I’m not familiar with Macs, so I was relying on their advice.  Actually, the advice that I got from one staffer was quite good.  But when I went to actually buy the machine, I got it home and found that what I had brought home was not what I’d ordered.  Which reminded me that the last time I needed to get a printer cartridge, they, again, gave me the wrong one.  (There is also the fact that, in relying on their advice over what I needed, they sold me some completely unnecessary software, when the function I wanted is already built in to the Mac.)
Overall, I think they still are a reasonably decent place to get stuff, but, obviously, they may be victims of their own success.  Getting a bit careless, perhaps.  So, equally obviously, I can’t just rely on them any more, and will have to be careful about who I send there, as well.
Like I  said, a bit of a disappointment …


REVIEW: “The Design of Rijndael”, Joan Daemen/Vincent Rijmen

BKDRJNDL.RVW   20091129

“The Design of Rijndael”, Joan Daemen/Vincent Rijmen, 2002, 3-540-42580-2
%A   Joan Daemen
%A   Vincent Rijmen
%C   233 Spring St., New York, NY   10013
%D   2002
%G   3-540-42580-2
%I   Springer-Verlag
%O   212-460-1500 800-777-4643 service-ny@springer-sbm.com
%O  http://www.amazon.com/exec/obidos/ASIN/3540425802/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/3540425802/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/3540425802/robsladesin03-20
%O   Audience s- Tech 3 Writing 1 (see revfaq.htm for explanation)
%P   238 p.
%T   “The Design of Rijndael: AES – The Advanced Encryption Standard”

This book, written by the authors of the Rijndael encryption algorithm (the engine underlying the Advanced Encryption Standard), explains how Rijndael works, discusses some implementation factors, and presents the approach to its design.  Daemen and Rijmen note the linear and differential cryptanalytic attacks to which DES (the Data Encryption Standard) was subject, the design strategy that resulted from their analysis, the possibilities of reduced-round attacks, and the details of related ciphers.

Chapter one is a history of the AES assessment and decision process.  It is interesting to note the requirements specified, particularly the fact that AES was intended to protect “sensitive but unclassified” material.  Background in regard to mathematical and block cipher concepts is given in chapter two.  The specifications of Rijndael sub-functions and rounds are detailed in chapter three.  Chapter four notes implementation considerations in small platforms and dedicated hardware.  The design philosophy underlying the work is outlined in chapter five: much of it concentrates on simplicity and symmetry.
Differential and linear cryptanalysis mounted against DES is examined in chapter six.  Chapter seven reviews the use of correlation matrices in cryptanalysis.  If differences between pairs of plaintexts can be calculated as they propagate through the boolean functions used for intermediate and resultant ciphertext, then chapter eight shows how this can be used as the basis of differential cryptanalysis.  Using the concepts from these two chapters, chapter nine examines how the wide trail design diffuses cipher operations and data to prevent strong linear correlations or differential propagation.  There is also a formal proof of the resistance of the Rijndael construction.  Chapter ten looks at a number of cryptanalytic attacks and problems (including the infamous weak and semi-weak keys of DES) and notes the protections provided in the design of Rijndael.  Cryptographic algorithms that made a contribution to, or are descended from, Rijndael are described in chapter eleven.
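For readers who want to poke at the algorithm itself while reading, the standardized form of Rijndael is available in essentially every cryptographic library.  A minimal sketch, assuming the third-party Python `cryptography` package is installed: AES-128 (ten rounds) applied to a single 16-byte block.  Raw ECB on one block is used purely to expose the primitive the book analyzes; a real application would use an authenticated mode instead.

```python
# AES-128 applied to one 16-byte block, purely to illustrate the primitive.
# Assumes the third-party "cryptography" package; do not use raw ECB in practice.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                 # 128-bit key: ten Rijndael rounds
block = b"sixteen byte msg"          # exactly one 128-bit block

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(block) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == block
```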

This book is intended for serious students of cryptographic algorithm design: it is a highly demanding text, and requires a background in the formal study of number theory and logic.  Given that, it does provide some fascinating examination of both the advanced cryptanalytic attacks, and the design of algorithms to resist them.

copyright Robert M. Slade, 2009    BKDRJNDL.RVW   20091129


Metasploit 3.4.1 Released

Sunday 11th July saw the release of the latest version of the Metasploit Framework, and you can tell that the guys have been really busy over in Metasploit development land. Please see the release notes for this version below, and you can download the latest version from here.

Statistics

  • Metasploit now has 567 exploits and 283 auxiliary modules (up from 551 and 261 in v3.4)
  • Over 40 community reported bugs were fixed and numerous interfaces were improved

General

  • The Windows installer now ships with a working Postgres connector
  • New session notifications now always print a timestamp regardless of the TimestampOutput setting
  • Addition of the auxiliary/scanner/discovery/udp_probe module, which works through Meterpreter pivoting
  • HTTP client library is now more reliable when dealing with broken/embedded web servers
  • Improvements to the database import code, covering NeXpose, Nessus, Qualys, and Metasploit Express
  • The msfconsole “connect” command can now speak UDP (specify the -u flag)
  • Nearly all exploit modules now have a DisclosureDate field
  • HTTP fingerprinting routines added to some exploit modules
  • The psexec module can now run native x64 payloads on x64 based Windows systems
  • A development style guide has been added in the HACKING file in the SVN root
  • FTP authentication bruteforce modules added
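As a quick way to exercise one of the additions above (a sketch only: it assumes msfconsole is on your PATH, and the target range is a made-up lab subnet you would replace with one you are authorized to scan), the new udp_probe scanner can be driven non-interactively via msfconsole’s -x option:

```python
# Run the new udp_probe discovery module non-interactively.
# Assumes msfconsole is installed and on PATH; RHOSTS is a hypothetical lab range.
import subprocess

commands = "; ".join([
    "use auxiliary/scanner/discovery/udp_probe",
    "set RHOSTS 192.168.56.0/24",   # replace with your own (authorized!) range
    "set THREADS 10",
    "run",
    "exit",
])

subprocess.run(["msfconsole", "-q", "-x", commands], check=True)
```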

Payloads

  • Some Meterpreter scripts (notably persistence and getgui) now create a resource file to undo the changes made to the target system.
  • Meterpreter scripts that create logs and download files now save their data in the ~/.msf3/logs/scripts folder.
  • New Meterpreter Scripts:
  • enum_firefox – Enumerates Firefox data like history, bookmarks, form history, typed URLs, cookies and downloads databases.
  • arp_scanner – Script for performing ARP scan for a given CIDR.
  • enum_vmware – Enumerates VMware products and their configuration.
  • enum_powershell – Enumerates powershell version, execution policy, profile and installed modules.
  • enum_putty – Enumerates recent and saved connections.
  • get_filezilla_creds – Enumerates recent and saved connections and extracts saved credentials.
  • enum_logged_on_users – Enumerate past users that logged in to the system and current connected users.
  • get_env – Extracts all user and system environment variables.
  • get_application_list – Enumerates installed applications and their versions.
  • autoroute – Sets a route from within a Meterpreter session without the need to background the sessions.
  • panda_2007_pavsrv53 – Panda 2007 privilege escalation exploit.
  • Support for a dns bypass list added to auxiliary/server/fakedns. It allows the user to specify which domains to resolve externally while returning forged records for everything else. Thanks to Rudy Ruiz for the patch.
  • Railgun – The Meterpreter “RAILGUN” extension by Patrick HVE has merged and is now available for scripts.
  • PHP Meterpreter – A protocol-compatible port of the original Meterpreter payload to PHP. This new payload adds the ability to pivot through webservers regardless of the native operating system.
  • Token impersonation now works with “execute -t” to spawn new commands with a stolen token.

Known Issues

  • Interacting with a meterpreter session during a migration will break the session. See #1360.
  • There is no simple way to interrupt a background script started by AutoRunScript
  • Command interaction on Windows causes a PHP Meterpreter session to die. See #2232

Nmap Scripting Engine (NSE)

A few days ago, I found myself playing with the NSE again, and got to thinking about how many people actually know about NSE, and how to use it. This really is one of my favourite features that has been added to nmap over the years, and it really does make your life easier when doing a lot of scanning.

So, what is the NSE, I hear you ask? Well, instead of me trying to come up with a better way to explain, I’ve taken the following from the nmap online book, which can be found here.
“The Nmap Scripting Engine (NSE) is one of Nmap’s most powerful and flexible features. It allows users to write (and share) simple scripts to automate a wide variety of networking tasks. Those scripts are then executed in parallel with the speed and efficiency you expect from Nmap. Users can rely on the growing and diverse set of scripts distributed with Nmap, or write their own to meet custom needs.”

Some of the new scripts that were added recently were the following, and from the descriptions, you can see just how beneficial these are:

asn-query—Maps IP addresses to autonomous system (AS) numbers.
auth-spoof—Checks for an identd (auth) server which is spoofing its replies.
banner—A simple banner grabber which connects to an open TCP port and prints out anything sent by the listening service within five seconds.
dns-random-srcport—Checks a DNS server for the predictable-port recursion vulnerability. Predictable source ports can make a DNS server vulnerable to cache poisoning attacks (see CVE-2008-1447).
dns-random-txid—Checks a DNS server for the predictable-TXID DNS recursion vulnerability. Predictable TXID values can make a DNS server vulnerable to cache poisoning attacks (see CVE-2008-1447).
ftp-bounce—Checks to see if an FTP server allows port scanning using the FTP bounce method.
http-iis-webdav-vuln—Checks for a vulnerability in IIS 5.1/6.0 that allows arbitrary users to access secured WebDAV folders by searching for a password-protected folder and attempting to access it. This vulnerability was patched in Microsoft Security Bulletin MS09-020.
http-passwd—Checks if a web server is vulnerable to directory traversal by attempting to retrieve /etc/passwd using various traversal methods such as requesting ../../../../etc/passwd.
imap-capabilities—Retrieves IMAP email server capabilities.
mysql-info—Connects to a MySQL server and prints information such as the protocol and version numbers, thread ID, status, capabilities, and the password salt.
pop3-brute—Tries to log into a POP3 account by guessing usernames and passwords.
pop3-capabilities—Retrieves POP3 email server capabilities.
rpcinfo—Connects to portmapper and fetches a list of all registered programs.
snmp-brute—Attempts to find an SNMP community string by brute force guessing.
socks-open-proxy—Checks if an open socks proxy is running on the target.
upnp-info—Attempts to extract system information from the UPnP service.
whois—Queries the WHOIS services of Regional Internet Registries (RIR) and attempts to retrieve information about the IP Address Assignment which contains the Target IP Address.
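If you want to try a couple of the scripts listed above without writing anything yourself, they can be invoked directly with nmap’s --script option; the short sketch below just drives nmap from Python via subprocess (the target, scanme.nmap.org, is the host the Nmap project provides for test scans, and you should of course only scan hosts you are permitted to scan):

```python
# Run a couple of NSE scripts against the Nmap project's designated test host.
# Assumes the nmap binary is installed and on PATH.
import subprocess

result = subprocess.run(
    ["nmap", "-sV", "--script", "banner,asn-query", "scanme.nmap.org"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```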

All NSE scripts are written in the Lua programming language. For the NSE side of things, Lua is easy enough to pick up, so it isn’t hard to come up with some decent scripts and then share them with others. The more people who write these add-on scripts, the better it is for everyone.

I hope that this was useful to someone, and if you’d like to see any other articles on tools, etc., then let me know via the comments and I’ll see what I can do to accommodate.


REVIEW: “SSL and TLS: Theory and Practice”, Rolf Oppliger

BKSSLTTP.RVW   20091129

“SSL and TLS: Theory and Practice”, Rolf Oppliger, 2009, 978-1-59693-447-4
%A   Rolf Oppliger rolf.oppliger@esecurity.ch
%C   685 Canton St., Norwood, MA   02062
%D   2009
%G   978-1-59693-447-4 1-59693-447-6
%I   Artech House/Horizon
%O   617-769-9750 800-225-9977 artech@artech-house.com
%O   http://books.esecurity.ch/ssltls.html
%O  http://www.amazon.com/exec/obidos/ASIN/1596934476/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/1596934476/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1596934476/robsladesin03-20
%O   Audience i+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P   257 p.
%T   “SSL and TLS: Theory and Practice”

The preface states that the book is intended to update the existing literature on SSL (Secure Sockets Layer) and TLS (Transport Layer Security), and to provide a design level understanding of the protocols.  (Oppliger does not address issues of implementation or specific products.)  The work assumes a basic understanding of TCP/IP, the Internet standards process, and cryptography, although some fundamental cryptographic principles are given.

Chapter one is a basic introduction to security and some related concepts.  The author uses the definition of security architecture from RFC 2828 to provide a useful starting point and analogy.  The five security services listed in ISO 7498-2 and X.800 (authentication, access control, confidentiality, integrity, and nonrepudiation) are clearly defined, and the resultant specific and pervasive security mechanisms are mentioned.  In chapter two, Oppliger gives a brief overview of a number of cryptologic terms and concepts, but some (such as steganography) may not be relevant to examination of the SSL and TLS protocols.  (There is also a slight conflict: in chapter one, a secure system is defined as one that is proof against a specific and defined threat, whereas, in chapter two, this is seen as conditional security.)  The author’s commentary is, as in all his works, clear and insightful, but the cryptographic theory provided does go well beyond what is required for this topic.

Chapter three, although entitled “Transport Layer Security,” is basically a history of both SSL and TLS.  SSL is examined in terms of the protocols, structures, and messages, in chapter four.  There is also a quick analysis of the structural strength of the specification.

Since TLS is derived from SSL, the material in chapter five concentrates on the differences between SSL 3.0 and TLS 1.0, and then looks at algorithmic options for TLS 1.1 and 1.2.  DTLS (Datagram Transport Layer Security), for UDP (User Datagram Protocol), is described briefly in chapter six, and seems to simply add sequence numbers to UDP, with some additional provision for security cookie exchanges.  Chapter seven notes the use of SSL for VPN (virtual private network) tunneling.  Chapter eight reviews some aspects of public key certificates, but provides little background for full implementation of PKI (Public Key Infrastructure).  As a finishing touch, chapter nine notes the sidejacking attacks, concerns about man-in-the-middle (MITM) attacks (quite germane, at the moment), and notes that we should move from certificate-based PKI to a trust and privilege management infrastructure (PMI).
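For readers who have not looked at the protocol from the client side, here is a minimal sketch of a TLS handshake and the parameters it negotiates, using nothing beyond Python’s standard-library ssl module (the host name is just an example target):

```python
# Connect to a TLS server and report what the handshake negotiated.
# Uses only the Python standard library; the host is an arbitrary example.
import socket
import ssl

context = ssl.create_default_context()   # sane defaults, certificate verification on

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print("protocol:", tls.version())          # e.g. "TLSv1.2" or "TLSv1.3"
        print("cipher:  ", tls.cipher())           # (name, protocol version, secret bits)
        print("subject: ", tls.getpeercert().get("subject"))
```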

In relatively few pages, Oppliger has provided background, introduction, and technical details of the SSL and TLS variants you are likely to encounter.  The material is clear, well structured, and easily accessible.  He has definitely enhanced the literature, not only of TLS, but also of security in general.

copyright Robert M. Slade, 2009    BKSSLTTP.RVW   20091129


REVIEW: “Cloud Security and Privacy”, Tim Mather/Subra Kumaraswamy/Shahed Latif

BKCLSEPR.RVW   20091113

“Cloud Security and Privacy”, Tim Mather/Subra Kumaraswamy/Shahed Latif, 2009, 978-0-596-802769, U$34.99/C$43.99
%A   Tim Mather
%A   Subra Kumaraswamy
%A   Shahed Latif
%C   103 Morris Street, Suite A, Sebastopol, CA   95472
%D   2009
%G   978-0-596-802769 0-596-802765
%I   O’Reilly & Associates, Inc.
%O   U$34.99/C$43.99 800-998-9938 707-829-0515 nuts@ora.com
%O  http://www.amazon.com/exec/obidos/ASIN/0596802765/robsladesinterne
http://www.amazon.co.uk/exec/obidos/ASIN/0596802765/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0596802765/robsladesin03-20
%O   Audience i- Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   312 p.
%T   “Cloud Security and Privacy”

The preface tells how the authors met, and that they were interested in writing a book on clouds and security.  It provides no definition of cloud computing.  (It also emphasizes an interest in being “first to market” with a work on this topic.)

Chapter one is supposed to be an introduction.  It is very brief, and, yet again, doesn’t say what a cloud is.  (The authors aren’t very careful about building background information: the acronym SPI is widely used and important to the book, but is used before it is defined.  It stands for SaaS/PaaS/IaaS, or software-as-a-service, platform-as-a-service, and infrastructure-as-a-service.  More simply, this refers to applications, management/development utilities, and storage.)  A delineation of cloud computing is finally given in chapter two, stating that it is characterized by multitenancy, scalability, elasticity, pay-as-you-go options, and self-provisioning.  (As these aspects are expanded, it becomes clear that the scalability, elasticity, and self-provisioning characteristics the authors describe are essentially the same thing: the ability of the user or client to manage the increase or decrease in services used.)  The fact that the authors do not define the term “cloud” becomes important as the guide starts to examine security considerations.  Interoperability is listed as a benefit of the cloud, whereas one of the risks is identified as vendor lock-in: these two factors are inherently mutually exclusive.

Chapter three talks about infrastructure security, but the advice seems to reduce to a recommendation to review the security of the individual components, including SaaS, PaaS, and network elements, which seems to ignore the emergent risks arising from any complex environment.  Encryption is said to be only a small part of data security in storage, as addressed in chapter four, but most of the material discusses encryption.  The deliberation on cryptography is superficial: the authors have managed to include the very recent research on homomorphic encryption, and note that the field will advance rapidly, but do not mention that homomorphic encryption is only useful for a very specific subset of data representations.  The identity management problem is outlined in chapter five, and protocols for managing new systems are reviewed, but the issue of integrating these protocols with existing systems is not.  “Security management in the Cloud,” as examined in chapter six, is a melange of general security management and operations management, with responsibility flipping back and forth between the customer and the provider.  Chapter seven provides a very good overview of privacy, but with almost no relation to the cloud as such.  Audit and compliance standards are described in chapter eight: only one is directed at the cloud.  Various cloud service providers (CSP) are listed in chapter nine.  The terse description of security-as-a-service (confusingly also listed as SaaS), in chapter ten, is almost entirely restricted to spam and Web filtering.  The impact of the use of cloud technology is dealt with in chapter eleven.  It lists the pros and cons, but again, some of the points are presented without noting that they are mutually exclusive.  Chapter twelve finishes off the book with a precis of the foregoing chapters.

The authors do raise a wide variety of the security problems and concerns related to cloud computing.  However, since these are the same issues that need to be examined in any information security scenario, it is hard to say that any cloud-specific topics are addressed.  Stripped of excessive verbiage, the advice seems to reduce to a) know what you want, b) don’t make assumptions about what the provider provides, and c) audit the provider.

copyright Robert M. Slade, 2009    BKCLSEPR.RVW   20091113


AMTSO Inside and Outside

God bless Twitter.

A day or two ago, I was edified by the sight of two journalists asking each other whether AMTSO (the Anti-Malware Testing Standards Organization) had actually achieved anything yet. Though one of them did suggest that the other might ask me. (Didn’t happen.)

Well, it’s always a privilege to see cutting edge investigative journalism in action. I know the word researcher is in my job title, but I normally charge for doing other people’s research. Still, since you’re obviously both very busy, and since, as a member of the AMTSO Board of Directors (NB: that’s a volunteer role), I guess I do have some insight here, let me help you out, guys.

Since the first formal meeting of AMTSO in May 2008, where a whole bunch of testers, vendors, publishers and individuals sat down to discuss how the general standard of testing could be raised, the organization has approved and published a number of guidelines/best practices documents.

To be more specific:

The “Fundamental Principles of Testing” document is a decent attempt at doing what it says on the tin, and providing a baseline definition for what good testing is at an abstract level.

The Guidelines documents provide… errrr, guidelines… in a number of areas:

  • Dynamic Testing
  • Sample Validation
  • In the Cloud Testing
  • Network Based Product Testing
  • Whole Product Testing
  • Performance Testing

Another document looks at the pros and cons of creating malware for testing purposes.

The analysis-of-reviews document provides a basis for the review analysis process, which has so far resulted in two review analyses – well, that was a fairly painful gestation process, and in fact, there was a volatile but necessary period in the first year in particular while various procedures, legal requirements and so on were addressed. There are several other papers being worked on.

A fairly comprehensive links/files repository for testing-related resources was established here, and new resources have been added from AMTSO members and others.

Unspectacular, and no doubt journalistically uninteresting. But representing a lot of volunteer work by people who already have full time jobs.

You don’t have to agree with every sentence of every document: the point is that these documents didn’t exist before, and they go some way towards meeting the needs of those people who want to know more about testing, whether as a tester, tester’s audience, producer of products under test, or any other interested party. Perhaps most importantly, the idea has started to spread that perhaps testers should be accountable to their customers (those who read their reviews) for the accuracy and fitness for purpose of their tests, just as security vendors are accountable to their own customers.

[Perhaps I’d better clarify that: I’m not saying that tests have to be, or can be, perfect, any more than products can. (You might want 100% security or 100% accuracy, but that isn’t possible.)]

You don’t have to like what AMTSO does. But it would be nice if you’d actually make an effort to find out what we do and maybe even consider joining (AMTSO does not only admit vendors and testers) before you moan into extinction an organization that is trying to do something about a serious problem that no-one else is addressing.

David Harley CITP FBCS CISSP
Not speaking for AMTSO


Why Is Paid Responsible Disclosure So Damn Difficult?

So I’ve been sitting on an Apple vulnerability for over a month now, and I’m really starting to realise that maybe just sending the details to the Full-Disclosure mailing list and Exploit-DB.com is the right way to go about disclosing vulnerabilities and exploits.

I initially contacted ZDI to see if they would be at all interested in buying the exploit off of me, as I spent a lot of time researching and finding this one, and I’d like to get something for my efforts. I am a firm believer in the No More Free Bugs movement, and I understand and appreciate what ZDI are doing, but the fact that it took them just under a month to get back to me is really not good enough, to be very honest. If they don’t have the researchers, then advertise worldwide, instead of US only. I know I, for one, would be happy validating bugs all day, and this is the type of work that can be done remotely.
Yesterday I also submitted the same information to the iDefense Labs Vulnerability Contributor Program (VCP), who claim they will get back to me within 48 hours, so we’ll see how that goes. I will update this post as and when I know more.

I also mailed Apple directly, on the off chance, asking if they offer any rewards for vulnerabilities that have been found, and if so what they would be. I don’t have high hopes of Apple offering anything, but to be honest, I would prefer to disclose this one directly to Apple. They, however, have paid staff doing this work full time on all their products (so why aren’t they doing it properly?), and I feel that anyone else finding bugs for them should be compensated appropriately. In any case, I e-mailed them yesterday and received an automated response, so we’ll see how long it takes them to respond to me as well.

This may end up being a rather long post, but let’s see. I’m also expecting to see quite a few interesting comments on this post as well, so come on people.

UPDATE 30/06/2010:

Received a response from iDefense last night, and a request for more info. So just over a 24-hour response time, which is brilliant; I’m really impressed so far.

Received a response from Apple: I was informed that if I would like any reward (aside from credit for the find), I should go through ZDI or iDefense.


Backtrack – The Future, The Funding, The Roadmap

Great news: BackTrack now has funding to move ahead with scheduled releases, and a roadmap moving forward up to BackTrack 5. You can view the roadmap here. It seems that the world’s leader in penetration testing training, namely Offensive Security, is going to be funding the BackTrack Linux distribution’s development going forward. No need to worry though: BackTrack is still going to remain an open source distro.

Other news on this front is that the Exploit Database now has new EDB Research and Development teams that are actively working on vulnerability discovery and development, so watch this space for more news and good things to come. It’s also very worthwhile checking out the Exploit Database Blog.


Hack In The Box Security Conference Comes to Europe

The first ever HITB Security Conference in Europe will be held in Amsterdam on the 1st and 2nd of July, so apologies for only posting this now, but there’s still time to register.

The full conference agenda can be found here.

Some of the talks listed are:

- Breaking Virtualization by Switching to Virtual 8086 Mode

- Attacking SAP Users Using sapsploit

- Fireshark – A tool to Link the Malicious Web

- Having Fun with Apple’s IOKit

So all in all, it looks like it’s going to be an interesting couple of days.

Leave a comment if you’re going, it’d be good to hook up.


National Strategy for Trusted Identities in Cyberspace

There is no possible way this could potentially go wrong, right?

Doesn’t the phrase “Identity Ecosystem” make you feel all warm and “green”?

It’s a public/private partnership, right?  So there is no possibility of some large corporation taking over the process and imposing *their* management ideas on it?  Like, say, trying to re-introduce the TCPI?

And there couldn’t possibly be any problem that an identity management system is being run out of the US, which has no privacy legislation?

The fact that any PKI has to be complete, and locked down, couldn’t affect the outcome, could it?

There isn’t any possible need for anyone (who wasn’t a vile criminal) to be anonymous, is there?


Your Chance To Get The Tools You Want Added To The Next Backtrack Release (BT4r1)

If there are any tools that you currently use that aren’t already in the Backtrack 4 Linux distribution, then now is your chance to get them added to the next Backtrack release.

The guys over at Offensive Security have set up a page where you can submit your requests. I urge everyone to make use of this if there is anything that you think the Backtrack community could benefit from, and that would make your lives easier.

The link to submit requests can be found here.
