Vanishingly small utility …

This system has had some discussion in the forensics world over the past few days.  Here’s an extract from Science Daily:

“Computers have made it virtually impossible to leave the past behind. College Facebook posts or pictures can resurface during a job interview. A lost cell phone can expose personal photos or text messages. A legal investigation can subpoena the entire contents of a home or work computer. The University of Washington has developed a way to make such information expire. After a set time period, electronic communications such as e-mail, Facebook posts and chat messages would automatically self-destruct, becoming irretrievable from all Web sites, inboxes, outboxes, backup sites and home computers. Not even the sender could retrieve them.

“The team of UW computer scientists developed a prototype system called Vanish that can place a time limit on text uploaded to any Web service through a Web browser.

[Perhaps a bit narrower focus than the original promise, but it is a prototype - rms]

“After a set time text written using Vanish will, in essence, self-destruct.  The Vanish prototype washes away data using the natural turnover, called “churn,” on large file-sharing systems known as peer-to-peer networks. For each message that it sends, Vanish creates a secret key, which it never reveals to the user, and then encrypts the message with that key. It then divides the key into dozens of pieces and sprinkles those pieces on random computers that belong to worldwide file-sharing networks. The file-sharing system constantly changes as computers join or leave the network, meaning that over time parts of the key become permanently inaccessible. Once enough key parts are lost, the original message can no longer be deciphered.”
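The mechanism described above — encrypt with a throwaway key, threshold-split the key, and let churn erode the shares — can be sketched with Shamir secret sharing over a prime field. This is purely illustrative: the real Vanish system distributes shares across a DHT (Vuze) and uses proper cryptographic primitives, and all names and parameters below are my own assumptions, not the authors' code.

```python
# Illustrative sketch of Vanish's core idea (NOT the actual Vanish code):
# split a secret encryption key into n shares such that any `threshold`
# of them recover it, while fewer yield nothing. In Vanish, churn in the
# peer-to-peer network gradually destroys shares; once fewer than
# `threshold` survive, the key -- and hence the message -- is gone.
import random

PRIME = 2**127 - 1  # a Mersenne prime; all share arithmetic is mod PRIME


def split_secret(secret, n_shares, threshold):
    """Shamir split: embed `secret` as the constant term of a random
    degree-(threshold-1) polynomial, and hand out points on it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares


def recover_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret


key = random.randrange(PRIME)          # the never-revealed message key
shares = split_secret(key, n_shares=10, threshold=7)

# While at least 7 shares survive churn, the key is recoverable:
assert recover_secret(random.sample(shares, 7)) == key
# With only 6 survivors, interpolation yields garbage, not the key:
assert recover_secret(random.sample(shares, 6)) != key
```

The threshold parameter is what makes the "natural turnover" argument work: the system tolerates some share loss (so legitimate readers can still decrypt before the deadline), but once churn pushes the surviving count below the threshold, decryption becomes infeasible for everyone, sender included.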

However, given the promise to clean up social networking sites, an immediate problem occurred to me as I started reading the paper.  And, lo and behold, the authors admit it:

“We therefore focus our threat model and subsequent analyses on attackers who wish to compromise data privacy. Two key properties of our threat model are:
1. Trusted data owners. Users with legitimate access to the same VDOs trust each other.
2. Retroactive attacks on privacy. Attackers do not know which VDOs they wish to access until after the VDOs expire.
The former aspect of the threat model is straightforward, and in fact is a shared assumption with traditional encryption schemes: it would be impossible for our system to protect against a user who chooses to leak or permanently preserve the cleartext contents of a VDO-encapsulated file through out-of-band means. For example, if Ann sends Carla a VDO-encapsulated email, Ann must trust Carla not to print and store a hard-copy of the email in cleartext.”

So, this system works perfectly.  If you only communicate with people you trust (both in terms of intent and competence), who only use the system properly, and who never feed the information into any program outside the system, it’s completely secure.

How often have we heard that said?

The default-to-privacy aspect is interesting, and so is the automatic transparency for the user, but this simply moves the problem one step back, as it were.  In terms of utility to social networking, the social networks would have to be completely rewritten to adhere to the system, and even then it would be pretty much impossible to ensure that nobody had the ability to scrape data and keep it, or publish it elsewhere.

(Plus, the data is still there, and so is Moore’s Law …)

  • lyalc

    I don’t see how the basic premise, as described above, helps at all.
    Capture enough key components from the same public peer to peer networks, and you have the keys to all ‘secured’ data.
    The attack becomes one of having enough storage for all the keys – maybe a few terabytes, a few hundred dollars at today’s prices.
