Windows 2012 R2 Certification Authority installation guide

This step-by-step guide explains how to install and configure a public key infrastructure based on:

  • Windows 2012 R2 Server core – offline Root CA
  • Windows 2012 R2 domain controller
  • Windows 2012 R2 standard edition – Subordinate Enterprise CA server

Offline Root CA – OS installation phase

  1. Boot the server using Windows 2012 R2 bootable DVD.
  2. From the installation option, choose “Windows Server 2012 R2 Standard (Server Core Installation)” -> click Next.
  3. Accept the license agreement -> click Next.
  4. Choose “Custom: Install Windows Only (Advanced)” installation type -> specify the hard drive to install the operating system -> click Next.
  5. Allow the installation phase to continue and restart the server automatically.
  6. To login to the server for the first time, press CTRL+ALT+DELETE
  7. Choose “Administrator” account -> click OK to replace the account password -> specify complex password and confirm it -> press Enter -> Press OK.
  8. From the command prompt window, run the command below:
    sconfig.cmd
  9. Press “2” to change the computer name -> specify the new computer name -> click “Yes” to restart the server.
  10. To login to the server, press CTRL+ALT+DELETE -> specify the “Administrator” account credentials.
  11. From the command prompt window, run the command below:
    sconfig.cmd
  12. Press “5” to configure “Windows Update Settings” -> select “A” for automatic -> click OK.
  13. Press “6” to download and install Windows Updates -> choose “A” to search for all updates -> choose “A” to download and install all updates -> click “Yes” to restart the server.
  14. To login to the server, press CTRL+ALT+DELETE -> specify the “Administrator” account credentials.
  15. From the command prompt window, run the command below:
    sconfig.cmd
  16. In case you need to use RDP to access and manage the server, press “7” to enable “Remote Desktop” -> choose “E” to enable -> choose either “1” or “2” according to your client settings -> press OK.
  17. Press “8” to configure “Network settings” -> select the network adapter by its Index number -> press “1” to configure the IP settings -> choose “S” for static IP address -> specify the IP address, subnet mask and default gateway -> press “2” to configure the DNS servers -> click OK -> press “4” to return to the main menu.
  18. Press “9” to configure “Date and Time” -> choose the correct “date/time” and “time zone” -> click OK.
  19. Press “11” to restart the server to make sure all settings take effect -> click “Yes” to restart the server.
  20. To login to the server, press CTRL+ALT+DELETE -> specify the “Administrator” account credentials.
  21. From the command prompt window, run the command below:
    powershell
  22. Run the commands below to enable remote management of the Root CA:
    Enable-NetFirewallRule -DisplayGroup "Remote Service Management"
    Note: The above command should be written on a single line.
    Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
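    Note: As an optional check (not part of the original procedure), the rule state can be confirmed with the built-in NetSecurity cmdlet:
    Get-NetFirewallRule -DisplayGroup "Remote Service Management","Remote Desktop" | Select-Object DisplayName,Enabled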

Offline Root CA – Certificate Authority server installation phase

  1. To login to the server, press CTRL+ALT+DELETE -> specify the “Administrator” account credentials.
  2. From the command prompt window, run the command below:
    powershell
  3. Run the command below to create the CA policy file:
    notepad c:\windows\capolicy.inf
  4. Specify the following data inside the capolicy.inf file:
    [Version]
    Signature="$Windows NT$"
    [Certsrv_Server]
    RenewalKeyLength=4096
    RenewalValidityPeriod=Years
    RenewalValidityPeriodUnits=20
    CRLPeriod=Weeks
    CRLPeriodUnits=26
    CRLDeltaPeriod=Days
    CRLDeltaPeriodUnits=0
    LoadDefaultTemplates=0
    AlternateSignatureAlgorithm=1
    [PolicyStatementExtension]
    Policies=LegalPolicy
    [LegalPolicy]
    OID=1.2.3.4.1455.67.89.5
    Notice="Legal Policy Statement"
    URL=http://www/CertEnroll/cps.asp
  5. Run the commands below to install the Certification Authority role using PowerShell:
    Import-Module ServerManager
    Add-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
    Note: Each command above should be written on a single line.
  6. Run the command below to install the Root CA:
    Install-AdcsCertificationAuthority -CAType StandaloneRootCA -KeyLength 4096 -HashAlgorithmName SHA256 -ValidityPeriod Years -ValidityPeriodUnits 20 -CACommonName <CA_Server_Name> -CryptoProviderName "RSA#Microsoft Software Key Storage Provider"
    Note 1: The above command should be written on a single line.
    Note 2: Replace “CA_Server_Name” with the Root CA NetBIOS name.
  7. Run the command below to remove all default CRL Distribution Points (CDPs):
    $crllist = Get-CACrlDistributionPoint; foreach ($crl in $crllist) {Remove-CACrlDistributionPoint $crl.uri -Force};
    Note: The above command should be written on a single line.
  8. Run the commands below to configure the new CRL Distribution Points (CDPs):
    Add-CACRLDistributionPoint -Uri C:\Windows\System32\CertSrv\CertEnroll\%3%8.crl -PublishToServer -Force
    Note: The above command should be written on a single line.
    Add-CACRLDistributionPoint -Uri http://www/CertEnroll/%3%8.crl -AddToCertificateCDP -Force
    Note: The above command should be written on a single line.
  9. Run the command below to remove all default Authority Information Access (AIA) entries:
    $aialist = Get-CAAuthorityInformationAccess; foreach ($aia in $aialist) {Remove-CAAuthorityInformationAccess $aia.uri -Force};
    Note: The above command should be written on a single line.
  10. Run the command below to configure the new Authority Information Access (AIA) entry:
    Add-CAAuthorityInformationAccess -AddToCertificateAia -uri http://www/CertEnroll/%1_%3.crt
    Note: The above command should be written on a single line.
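    Note: As an optional check (not part of the original procedure), the resulting CDP and AIA configuration can be reviewed with the same PowerShell module used above:
    Get-CACrlDistributionPoint | Format-List Uri,PublishToServer,AddToCertificateCdp
    Get-CAAuthorityInformationAccess | Format-List Uri,AddToCertificateAia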
  11. Run the commands below to configure the Root CA settings:
    certutil.exe -setreg CA\CRLPeriodUnits 26
    certutil.exe -setreg CA\CRLPeriod "Weeks"
    certutil.exe -setreg CA\CRLDeltaPeriodUnits 0
    certutil.exe -setreg CA\CRLDeltaPeriod "Days"
    certutil.exe -setreg CA\CRLOverlapPeriodUnits 12
    certutil.exe -setreg CA\CRLOverlapPeriod "Hours"
    certutil.exe -setreg CA\ValidityPeriodUnits 20
    certutil.exe -setreg CA\ValidityPeriod "Years"
    certutil.exe -setreg CA\KeySize 4096
    certutil.exe -setreg CA\AuditFilter 127
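    Note: If desired, any of the values set above can be read back with certutil.exe -getreg, for example:
    certutil.exe -getreg CA\CRLPeriodUnits
    certutil.exe -getreg CA\ValidityPeriod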
  12. Run the commands below from the command line to configure the Offline Root CA to publish to Active Directory:
    certutil.exe -setreg ca\DSConfigDN "CN=Configuration, DC=mycompany,DC=com"
    Note 1: The above command should be written on a single line.
    Note 2: Replace “DC=mycompany,DC=com” according to your domain name.
    certutil.exe -setreg ca\DSDomainDN "DC=mycompany,DC=com"
    Note: Replace “DC=mycompany,DC=com” according to your domain name.
  13. Run the command below to restart the CertSvc service:
    Restart-Service certsvc
  14. Run the command below to publish new CRLs:
    certutil.exe -CRL
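    Note: As an optional check, the generated CRL can be inspected with certutil; the file name below is a placeholder and will match the Root CA common name:
    certutil.exe -dump "C:\Windows\System32\CertSrv\CertEnroll\<CA_Server_Name>.crl"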

Enterprise Subordinate CA – OS installation phase
Prerequisites:

  • Active Directory (Forest functional level – Windows 2012 R2)
  • Add an “A” record for the Root CA to the Active Directory DNS.
  1. Boot the server using Windows 2012 R2 bootable DVD.
  2. From the installation option, choose “Windows Server 2012 R2 Standard (Server with a GUI)” -> click Next.
  3. Accept the license agreement -> click Next.
  4. Choose “Custom: Install Windows Only (Advanced)” installation type -> specify the hard drive to install the operating system -> click Next.
  5. Allow the installation phase to continue and restart the server automatically.
  6. To login to the server for the first time, press CTRL+ALT+DELETE
  7. Choose “Administrator” account -> click OK to replace the account password -> specify complex password and confirm it -> press Enter -> Press OK.
  8. From the “Welcome to Server Manager”, click on “Configure this local server” -> replace the “Computer name” -> restart the server.
  9. From the “Welcome to Server Manager”, click on “Configure this local server” -> click on Ethernet -> right click on the network interface -> properties -> configure static IP address.
  10. Enable “Remote Desktop”
  11. From the command prompt window, run the command below:
    powershell
  12. Run the command below to enable remote management of the Subordinate CA:
    Enable-NetFirewallRule -DisplayGroup "Remote Desktop"

Enterprise Subordinate CA – Certificate Authority server installation phase
Prerequisites:

  • DNS CNAME record named “www” for the Enterprise Subordinate CA.
  • Make sure the clocks of the Offline Root CA and the Subordinate CA are synched.
  1. To login to the server, press CTRL+ALT+DELETE -> specify the credentials of an account that is a member of “Schema Admins”, “Enterprise Admins” and “Domain Admins”.
  2. Copy the files below from the Offline Root CA server to a temporary folder on the subordinate CA:
    C:\Windows\System32\CertSrv\CertEnroll\*.crt
    C:\Windows\System32\CertSrv\CertEnroll\*.crl
  3. Run the command below to publish the Root CA in the Active Directory:
    certutil.exe -dspublish -f "<CACertFileName.crt>" RootCA
    Note: Replace “CACertFileName” with the actual CRT file.
  4. Run the commands below to add the Root CA certificate and CRL to the subordinate CA certificate store:
    certutil.exe -addstore -f root "<CACertFileName.crt>"
    certutil.exe -addstore -f root "<CACertFileName.crl>"

    Note: Replace “CACertFileName” with the actual CRT and CRL files.
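    Note: As an optional check (not part of the original guide), the Root CA certificate should now appear in the local machine “Root” store and can be listed with:
    certutil.exe -store root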
  5. From the command prompt window, run the command below:
    powershell
  6. Run the command below to create the CA policy file:
    notepad c:\windows\capolicy.inf
  7. Specify the following data inside the capolicy.inf file:
    [Version]
    Signature="$Windows NT$"
    [Certsrv_Server]
    RenewalKeyLength=2048
    RenewalValidityPeriod=Years
    RenewalValidityPeriodUnits=5
    LoadDefaultTemplates=0
    AlternateSignatureAlgorithm=1
  8. Run the commands below to install the Certification Authority role using PowerShell:
    Import-Module ServerManager
    Add-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
    Note: Each command above should be written on a single line.
    Add-WindowsFeature Web-Mgmt-Console
    Add-WindowsFeature Adcs-Web-Enrollment
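    Note: As an optional check (not part of the original guide), the installation state of the added features can be confirmed with:
    Get-WindowsFeature ADCS-Cert-Authority,Adcs-Web-Enrollment,Web-Mgmt-Console | Select-Object Name,InstallState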
  9. Open Server Manager -> From the “Welcome to Server Manager”, click on notification icon -> click on “Configure Active Directory Certificate Services on the destination server”
  10. Specify credentials and click on Next.
  11. Select both “Certification Authority” and “Certification Authority Web Enrollment” roles and click on Next.
  12. Select “Enterprise CA” -> click on Next.
  13. Select “Subordinate CA” -> click on Next.
  14. Select “Create a new private key” -> click on Next.
  15. Cryptography:
    Cryptographic service provider (CSP): RSA#Microsoft Software Key Storage Provider
    Key length: 2048
    Hash algorithm: SHA256
  16. CA Name:
    Common name: specify here the subordinate server NetBIOS name
    Distinguished name suffix: leave the default domain settings
  17. Select “Save a certificate request to file on the target machine” -> click Next
  18. Specify the database location and click Next.
  19. Click on Configure -> wait until the process completes and click on Close.
    Note: If asked, choose not to configure additional role services.
  20. Copy the request file (*.req) to the Offline Root CA.
  21. Login to the Offline Root CA using administrative account.
  22. Run the command below to submit the subordinate CA certificate request:
    certreq -submit "<CACertFileName>.req"
    Note: Replace “CACertFileName” with the actual request file.
  23. Run the command below to approve the subordinate CA request:
    certutil -resubmit 2
    Note: Replace “2” with the request ID.
  24. Run the command below to download the new certificate:
    certreq -retrieve 2 "C:\<CACertFileName>.cer"
    Note 1: Replace “CACertFileName” with the actual CER file.
    Note 2: Replace “2” with the request ID.
  25. Logoff the Root CA and power it off. The Root CA can remain offline for up to 179 days, after which it must be powered on again to publish an updated CRL.
  26. Return to the Subordinate CA.
  27. Copy the file “c:\<CACertFileName>.cer” from the Offline Root CA to the Subordinate CA.
    Note: Replace “CACertFileName” with the actual CER file.
  28. Run the commands below to complete the Subordinate CA installation process:
    powershell
    Certutil -installcert "<CACertFileName>.cer"

    Note: Replace “CACertFileName” with the actual CER file.
  29. Run the command below to start the CA service:
    start-service certsvc
  30. Run the command below to remove all default CRL Distribution Points (CDPs):
    $crllist = Get-CACrlDistributionPoint; foreach ($crl in $crllist) {Remove-CACrlDistributionPoint $crl.uri -Force};
    Note: The above command should be written on a single line.
  31. Run the commands below to configure the new CRL Distribution Points (CDPs):
    Add-CACRLDistributionPoint -Uri C:\Windows\System32\CertSrv\CertEnroll\%3%8%9.crl -PublishToServer -PublishDeltaToServer -Force
    Note: The above command should be written on a single line.
    Add-CACRLDistributionPoint -Uri http://www/CertEnroll/%3%8%9.crl -AddToCertificateCDP -Force
    Note: The above command should be written on a single line.
    Add-CACRLDistributionPoint -Uri file://\\<SubordinateCA_DNS_Name>\CertEnroll\%3%8%9.crl -PublishToServer -PublishDeltaToServer -Force
    Note 1: The above command should be written on a single line.
    Note 2: Replace “<SubordinateCA_DNS_Name>” with the actual Subordinate CA DNS name.
  32. Run the command below to remove all default Authority Information Access (AIA) entries:
    $aialist = Get-CAAuthorityInformationAccess; foreach ($aia in $aialist) {Remove-CAAuthorityInformationAccess $aia.uri -Force};
    Note: The above command should be written on a single line.
  33. Run the commands below to configure the new Authority Information Access (AIA) entries:
    Add-CAAuthorityInformationAccess -AddToCertificateAia http://www/CertEnroll/%1_%3%4.crt -Force
    Note: The above command should be written on a single line.
    Add-CAAuthorityInformationAccess -AddToCertificateAia "ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11"
    Note: The above command should be written on a single line.
    Add-CAAuthorityInformationAccess -AddToCertificateOcsp http://www/ocsp -Force
    Note: The above command should be written on a single line.
  34. Run the commands below to configure the Subordinate CA settings:
    Certutil -setreg CA\CRLPeriodUnits 2
    Certutil -setreg CA\CRLPeriod "Weeks"
    Certutil -setreg CA\CRLDeltaPeriodUnits 1
    Certutil -setreg CA\CRLDeltaPeriod "Days"
    Certutil -setreg CA\CRLOverlapPeriodUnits 12
    Certutil -setreg CA\CRLOverlapPeriod "Hours"
    Certutil -setreg CA\ValidityPeriodUnits 5
    Certutil -setreg CA\ValidityPeriod "Years"
    certutil -setreg CA\AuditFilter 127
    certutil -setreg CA\EncryptionCSP\CNGEncryptionAlgorithm AES
    certutil -setreg CA\EncryptionCSP\SymmetricKeySize 256
    certutil -setreg CA\CRLFlags +CRLF_REVCHECK_IGNORE_OFFLINE
    certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
    Note: Each of the above commands should be written on a single line.
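    Note: If desired, the flag values set above can be verified afterwards; certutil decodes the individual flag names in its output:
    certutil.exe -getreg policy\EditFlags
    certutil.exe -getreg CA\CRLFlags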
  35. Run the command below to restart the CertSvc service:
    Restart-Service certsvc
  36. Run the command below to publish new CRLs:
    certutil.exe -CRL
  37. Copy the files below from the Root CA to the subordinate CA (same location):
    C:\Windows\System32\CertSrv\CertEnroll\*.crl
    C:\Windows\System32\CertSrv\CertEnroll\*.crt
  38. Create a CPS (Certification Practice Statement) and save it as “cps.asp” on the subordinate CA under the folder below:
    C:\Windows\System32\CertSrv\CertEnroll
    Note: For more information about Certification Practice Statements, see:
    http://technet.microsoft.com/en-us/library/cc780454(v=ws.10).aspx
  39. Login to a domain controller in the forest root domain, with an account that is a member of Domain Admins and Enterprise Admins.
  40. Open Server Manager -> Tools -> Active Directory Users and Computers.
  41. From the left pane, expand the domain name -> choose an OU and create the following groups:
    Group name: CA Admins
    Group description/purpose: Manage CA server
    Group name: CA Issuers
    Group description/purpose: Issue certificates
  42. Logoff the domain controller.
  43. Login to the Subordinate CA using an administrative account that is also a member of the “CA Admins” group.
  44. Open Server Manager -> Tools -> Certification Authority.
  45. From the left pane, right click on the CA server name -> Properties -> Security tab -> Add -> add the “CA Admins” group -> grant the permissions “Issue and Manage Certificates” and “Manage CA” and remove all other permissions -> click on OK.
    Note: As a best practice, it is recommended to remove the default permissions of “Domain Admins” and “Enterprise Admins”.
  46. From the left pane, expand the CA server name -> right click on Certificate Templates -> Manage -> from the main pane, right click on “User” certificate -> Duplicate Template -> General tab -> rename the template to “Custom User Certificate” -> Security tab -> click on Add -> add the “CA Issuers” group -> grant the permission “Read”, “Enroll” and “Autoenroll” -> click on OK.
  47. From the main pane, right click on “Web Server” certificate -> Duplicate Template -> General tab -> rename the template to “Custom Web Server Certificate” -> Request Handling tab -> select “Allow private key to be exported” -> Security tab -> click on Add -> add the “CA Issuers” group -> grant the permission “Read” and “Enroll” -> remove the permissions for the built-in Administrator account -> click on OK.
    Note: All computer accounts requesting the “Custom Web Server Certificate” certificate must be member of the “CA Issuers” group.
  48. From the main pane, right click on “OCSP Response Signing” certificate -> Duplicate Template -> General tab -> rename the template to “Custom OCSP Response Signing” -> Security tab -> add the subordinate CA computer account -> grant “Read”, “Enroll” and “Autoenroll” -> click OK.
  49. From the main pane, right click on “Web Server” certificate -> Properties -> Security tab -> click on Add -> add the “CA Issuers” group -> grant the permission “Read” and “Enroll” -> click OK
  50. Close the Certificate Templates Console.
  51. From the Certification Authority console left pane, right click on Certificate Templates -> New -> Certificate Template to issue -> select the following certificate templates:
    Web Server
    Custom User Certificate
    Custom Web Server Certificate
    Custom OCSP Response Signing
  52. Click OK.
  53. Close the Certification Authority console.
  54. Open Server Manager -> Manage -> Add Roles and Features -> click Next 3 times -> expand “Active Directory Certificate Services” -> select “Online Responder” -> click on Add Features -> click Next twice -> click on Install -> click on Close
  55. From the upper pane, click on notification icon -> click on “Configure Active Directory Certificate Services on the destination server”
  56. Specify credentials and click on Next.
  57. Select “Online Responder” -> click Next -> click on Configure -> click Close.
  58. From the left pane, right click on “Online Responder” -> Responder Properties -> Audit tab -> select “Changes to the Online Responder configuration”, “Changes to the Online Responder security settings” and “Requests submitted to the Online Responder” -> click OK -> close the “Online Responder Configuration” console.
  59. Open Server Manager -> Tools -> Local Security Policy -> from the left pane, expand “Advanced Audit Policies” -> expand “System Audit Policies – Local Group Policy Object” -> click on Object Access -> from the main pane, double click on “Audit Certification Services” -> select “Configure the following audit events” -> select both Success and Failure -> click OK -> close the Local Security policy console.
  60. Run from command line:
    certutil -CRL
  61. Run from command line:
    certutil -v -setreg policy\editflags +EDITF_ENABLEOCSPREVNOCHECK
    Note: The above command should be written on a single line.
  62. Run the commands below to restart the CertSvc service:
    powershell
    Restart-Service certsvc
  63. Open Server Manager -> Tools -> Online Responder Management
  64. From the left pane, right click on “Revocation Configuration” -> Add revocation configuration -> click Next -> on the name field, specify “Custom Revocation Configuration” -> click Next -> select “Select a certificate for an Existing enterprise CA” -> click Next -> click Browse -> select the subordinate CA -> click OK -> Automatically select a signing certificate -> click Next -> click Finish
  65. Close the Online Responder Management console
  66. Login to a domain controller in the forest root domain, with an account that is a member of Domain Admins and Enterprise Admins.
  67. Copy the files below from the subordinate CA server to a temporary folder on the domain controller:
    C:\Windows\System32\CertSrv\CertEnroll\*.crt
    Note: Copy the newest files
  68. Open Server Manager -> Tools -> Group Policy Management.
  69. From the left pane, expand the forest name -> expand Domains -> expand the relevant domain name -> right click on “Default domain policy” -> Edit.
  70. From the left pane, under “Computer Configuration” -> expand Policies -> expand “Windows Settings” -> expand “Security Settings” -> expand “Public Key Policies” -> right click on “Trusted Root Certification Authorities” -> Import -> click Next -> click Browse to locate the CRT file from the Root CA server -> click Open -> click Next twice -> click Finish -> click OK.
  71. From the left pane, under “Computer Configuration” -> expand Policies -> expand “Windows Settings” -> expand “Security Settings” -> expand “Public Key Policies” -> right click on “Intermediate Certification Authorities” -> Import -> click Next -> click Browse to locate the CRT file from the Subordinate CA server -> click Open -> click Next twice -> click Finish -> click OK.
  72. From the main pane, right click on the certificate name -> Properties -> OCSP tab -> inside the empty “Add URL” field, specify:
    http://www/ocsp
    Click on Add URL -> Click OK.
  73. From the left pane, under “Computer Configuration” -> expand Policies -> expand “Windows Settings” -> expand “Security Settings” -> click on “Public Key Policies” -> from the main pane, right click on “Certificate Services Client – Certificate Enrollment Policy” -> Properties -> change the “Configuration Model” to “Enabled” and click OK.
  74. From the left pane, under “Computer Configuration” -> expand Policies -> expand “Windows Settings” -> expand “Security Settings” -> click on “Public Key Policies” -> from the main pane, right click on “Certificate Services Client – Auto-Enrollment” -> Properties -> change the “Configuration Model” to “Enabled” -> select “Renew expired certificates, update pending certificates, and remove revoked certificates” and “Update certificates that use certificate templates” -> click OK.
  75. From the left pane, under “Computer Configuration” -> expand Policies -> expand “Administrative Templates” -> expand “Windows Components” -> expand “Internet Explorer” -> expand “Internet Control Panel” -> expand “Security Page” -> double click on “Site to zone assignment list” -> click on “Enabled” -> under Options, click on “Show” -> inside “Value name”, specify the Subordinate CA DNS name -> inside “Value”, specify 2 -> click OK twice.
  76. Close the “Group Policy Management”.
  77. Logoff the domain controller.
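    Note: On domain member computers, the new policy and certificate auto-enrollment can optionally be triggered manually rather than waiting for the next refresh interval (run from an elevated command prompt):
    gpupdate /force
    certutil.exe -pulse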
  78. Login to the Subordinate CA using administrative account.
  79. Open Server Manager -> Tools -> Internet Information Services (IIS) Manager.
  80. From the left pane, expand the server name -> expand Sites -> click on “Default Web Site” -> from the right pane, click on “Bindings” -> click on Add -> from the Type, select HTTPS -> under “SSL Certificate”, select the Subordinate CA certificate -> click OK -> click on Close.
  81. From the left pane, expand “Default Web Site” -> click on “CertSrv” -> from the main pane, double click on “Request Filtering” -> click Edit Feature Settings -> select “Allow Double Escaping” -> click OK
  82. From the main pane, double click on “SSL Settings” -> select “Require SSL” -> click on Apply.
  83. Close the Internet Information Services (IIS) Manager console.
  84. Run PKIVIEW.msc to make sure the entire PKI structure is fully functional.
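    Note: In addition to PKIVIEW.msc, the chain, CDP and OCSP paths of any issued certificate can be validated from the command line; the file name below is a placeholder for an exported certificate:
    certutil.exe -verify -urlfetch "<IssuedCertFileName>.cer"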
  85. Logoff the Subordinate CA.

 

The original article can be found at:

http://security-24-7.com/windows-2012-r2-certification-authority-installation-guide/


AV is dead … again …

Antivirus software only catches 45% of malware attacks and is “dead”, according to a senior manager at Symantec.

85.4% of statistics can be interpreted in the opposite way, and AV has been declared dead regularly since 1987.

Symantec “invented commercial antivirus software in the 1980s”?  That must come as news to the many companies, like Sophos, that I was reviewing long before Symantec bought out their first AV company.

“Dye told the Wall Street Journal that hackers increasingly use novel methods and bugs in the software of computers to perform attacks.”

There were “novel attacks” in 1986, and they got caught.  There have been novel attacks every year or so since, and they’ve been caught.  At the same time, lots of people get attacked and fail to detect it.  There’s never a horse that couldn’t be rode, and there’s never a rider that couldn’t be throwed.

“Malware has become increasingly complex in a post-Stuxnet world.”

So have computers.  Even before Stuxnet.  I think it was Grace Hopper who said that the reason it is difficult to secure complex systems is because they are complex systems.  (And she died a while back.)


Settle for nothing now … Settle for nothing later!

We settle for this. We, the consumers, are the problem! I don’t have much more to say…a picture is worth a thousand words.

bug bonanza


Big Government vs Big Corp – which is worse?

A programmer has been banned from Google for life.

This appears to be kind of like those Kafka-esque errors that big government sometimes makes [1] (and which reinforce the arguments against the “if you’re not doing anything wrong you don’t need privacy” position), with the added factor that there is absolutely nothing that can be done about it.

I suppose an individual programmer could bring civil suit against Google (and its undoubtedly huge population of lawyers) citing material damages for being forbidden from participating in the Google/Play/app store, but I wouldn’t be too sanguine about his chances of succeeding …

 

[1] – since the foreign workers program seems to be being used primarily to bring in workers for the oil and gas sector right now, do you think it would help if she offered to mount a production of “Grease”?


Disasters in BC

The auditor general has weighed in, and, surprise, surprise, we are not ready for an earthquake.

On the one hand, I’m not entirely sure that the auditor general completely understands disaster planning, and she hasn’t read Kenneth Myers and so doesn’t know that it can be counter-productive to produce plans for every single possibility.

On the other hand, I’m definitely with Vaughn Palmer in that we need more public education.  We are seeing money diverted from disaster planning to other areas, regardless of a supposed five-fold increase in emergency budget.  In the past five years, the professional association has been defunded, training is very limited in local municipalities, and even recruitment and “thank you” events for volunteers have almost disappeared.  Emergency planning funds shouldn’t be used to pay for capital projects.

(And the province should have been prepared for an audit in this area, since they got a warning shot last year.)

So, once again, and even more importantly, I’d recommend you all get emergency training.  I’ve said it before, I keep saying it, and I will keep on saying it.

(Stephen Hume agrees with me, although he doesn’t know the half of it.)


New computers – Windows 8 Phone

I was given a Win8Phone recently.  I suppose it may seem like looking a gift horse in the mouth to review it, but:

I must say, first off, that the Nokia Lumia has a lot of power compared to my other phone (and Android tablets), so I like the responsiveness using Twitter.  The antenna is decent, so I can connect to hotspots, even at a bit of a distance.  Also, this camera is a lot better than those on the three Android machines.

I’m finding the lack of functionality annoying.  There isn’t any file access on the phone itself, although the ability to access it via Windows Explorer (when you plug the USB cable into a Windows 7 or 8 computer) is handy.

I find the huge buttons annoying, and the interface for most apps takes up a lot of space.  This doesn’t seem to be adjustable: I can change the size of the font, but only for the content of an app, not for the frame or surround.

http://www.windowsphone.com/en-us/how-to/wp8 is useful: that’s how I found out how to switch between apps (hold down the back key and it gives you a set of icons of running/active apps).

The range of apps is pathetic.  Security aside (yes, I know a closed system is supposed to be more secure), you are stuck with a) Microsoft, or b) completely unknown software shops.  You are stuck with Bing for search and maps: no Google, no Gmail.  You are stuck with IE: no Firefox, Chrome, or Safari.  Oh, sorry, yes you *can* get Firefox, Chrome, and Safari, but not from Mozilla, Google, or Apple: from developers you’ve never heard of.  (Progpack, maker(s) of the Windows Phone store version of Safari, admits it is not the real Safari, it just “looks like it.”)  You can’t get YouTube at all.  No Pinterest, although there is a LinkedIn app from LinkedIn, and a Facebook app–from Microsoft.

It’s a bit hard to compare the interface.  I’m comparing a Nokia Lumia 920 which has lots of power against a) the cheapest Android cell phone Bell had when I had to upgrade my account (ver 2.2), b) an Android 4.3 tablet which is really good but not quite “jacket” portable, and c) a Digital2 Android 4.1 mini-tablet which is probably meant for children and is *seriously* underpowered.

Don’t know whether this is the fault of Windows or the Nokia, but the battery indicators/indications are a major shortcoming.  I have yet to see any indication that the phone has been fully charged.  To get any accurate reading you have to go to the battery page under settings, and even that doesn’t tell you a heck of a lot.  (Last night when I turned it off it said the battery was at 46% which should be good for 18 hours.  After using it four times this morning for a total of about an hour screen time and two hours standby it is at 29%.)

(When I installed the Windows Phone app on my desktop, and did some file transfers while charging the phone through USB I found that the app has a battery level indicator on most pages, so that’s helpful.)


Enhanced Nigerian scam – linkedin style

Linkedin is a much better platform for Nigerian scammers: They now have my first and last name, information about me, etc. So they can craft the following letter (sent by this guy):

Hello Aviram Jenik,

I am Dr Sherif Akande, a citizen of Ghana, i work with Barclay’s Bank Ltd, Ghana. I have in my bank Existence of the Amount of money valued at $8.400,000.00, the big hurt Belongs to the customer, Peter B.Jenik, who Happen To Have The Same name as yours. The fund is now without any Claim Because, Peter B.Jenik, in a deadly earthquake in China in 2008. I want your cooperation so that bank will send you the fund as the beneficiary and located next of kin to the fund.

This transaction will be of a great mutual assistance to us. Send me your reply of interest so that i will give you the details. Strictly send it to my private email account {sherifakande48@gmail.com} or send me your email address to send you details of this transaction.

At the receipt of your reply, I will give you details of the transaction.I look forward to hear from you. I will send you a scan copy of the deposit certificate.

Send me an email to my private email account {sherifakande48@gmail.com}for more details of the transaction.

Sincerely,
Best Regard’s
Dr Sherif Akande.
Here is my number +233548598269


Card fraud and other details

A family member recently encountered credit card fraud.  That isn’t unusual, but there were some features of the whole experience that seemed odd.

First off, the person involved is certain that the fraud relates to the use of the card at a tap/RFID/proximity reader.  The card has been in use for some time, but the day before the fraudulent charges the card was used, for the first time, at a gas pump with a “tap” reader.

(I suspect this is wrong.  The card owner feels that gas pumps, left unattended all night, would be a prime target for reader tampering.  I can’t fault that logic, but the fact that an address was later associated with use of the card makes me wonder.)

At any rate, the day after the gas was purchased, two charges were made with the credit card.  One was for about $600.00, and was with startech.com, a supplier of computer parts, particularly cables, based in Ontario.  The other charge was for almost $4000.00, and was with megabigpower.com, which specializes in hardware devices for Bitcoin mining, and operates out of Washington state.  (Given the price list, this seems consistent with about 8 Bitcoin mining cards, or about 20 USB mining devices.)  The credit card company was notified, and the card voided and re-issued.

A few days after that, two boxes arrived–at the address of the cardholder.  One came from startech.com via UPS and was addressed to John Purcer, the other was from megabigpower.com via Fedex and was addressed to Tom Smyth.  Both were left at the door, refused and returned to the delivery companies.  (At last report, the cardholder was trying to get delivery tracking numbers to ensure that the packages were returned to the companies.)

As noted previously, this is where I sat up.  Presumably a simple theft of the card data at a reader could not provide the cardholder’s address data.  An attempt might be made to ensure that the “ship to” address is the same as the “bill to” address (one of the companies says as much on its billing page), but I further assume that a call to the credit card company with a “hey, I forgot my address” query wouldn’t fly, and I doubt the credit card company would even give that info to the vendor company.

One further note: I mentioned to the cardholder that it was fortunate that the shipment via UPS was from the Canadian company, since UPS is quite unreasonable with charges (to the deliveree) involving taking anything across a border.  (When I was doing a lot more book reviews in the old days, I had to add a standard prohibition against using UPS to all my correspondence with companies outside Canada.)  When UPS was contacted about this delivery, the agent reported that the package was shown as delivered, with a note of “saw boy,” presumably since the cardholder’s son was home, or in the vicinity of the house, at the time of delivery.  The cardholder was understandably upset and asked to have that note taken off the record, and was then told a) the record could not be changed, and b) that was a standard code, presumably built-in to the tracking devices the drivers carry.

Just a note to those of you who care anything about privacy …


Best CTF in the history of CTFs ;)

This is a ton of fun, and a great tool for learning. Enjoy!

http://www.matasano.com/matasano-square-microcontroller-ctf/


Cyberbullying, anonymity, and censorship

Michael Den Tandt’s recent column in the Vancouver Sun is rather a melange, and deserves to have a number of points addressed separately.

First, it is true that the behaviours the “cyberbullying” bill addresses, those of spreading malicious and false information widely, generally using anonymous or misleading identities, do sound suspiciously close to those behaviours in which politicians engage themselves.  It might be ironic if the politicians got charged under the act.

Secondly, whether bill C-13 is just a thinly veiled re-introduction of the reviled C-30 is an open question.  (As one who works with forensic linguistics, I’d tend to side with those who say that the changes in the bill are primarily cosmetic: minimal changes intended to address the most vociferous objections, without seriously modifying the underlying intent.)

However, Den Tandt closes with an insistence that we need to address the issue of online anonymity.  Removing anonymity from the net has both good points and bad, and it may be that the evil consequences would outweigh the benefits.  (I would have thought that a journalist would have been aware of the importance of anonymous sources of reporting.)

More importantly, this appeal for the banning of anonymity betrays an ignorance of the inherent nature of networked communication.  The Internet, and related technologies, have so great an influence on our lives that it is important to know what can, and can’t, be done with it.

The Internet is not a telephone company, where the central office installs all the wires and knows at least where (and therefore likely who) a call came from.  The net is based on technology which is designed, from the ground up, in such a way that anyone, with any device, can connect to the nearest available source, and have the network, automatically, pass information to or from the relevant person or site.

The fundamental technology that connects the Internet, the Web, social media, and pretty much everything else that is seen as “digital” these days, is not a simple lookup table at a central office.  It is a complex interrelationship of protocols, servers, and programs that are built to allow anyone to communicate with anyone, without needing to prove your identity or authorization.  Therefore, nobody has the ability to prevent any communication.

There are, currently, a number of proposals to “require” all communications to be identified, or all users to have an identity, or prevent anyone without an authenticated identity from using the Internet.  Any such proposals will ultimately fail, since they ignore the inherent foundational nature of the net.  People can voluntarily participate in such programs–but those people probably wouldn’t have engaged in cyberbullying in any case.

John Gilmore, one of the people who built the basics of the Internet, famously stated that “the Internet interprets censorship as damage and routes around it.”  This fact allows those under oppressive regimes to communicate with the rest of the world–but it also means that pornography and hate speech can’t be prevented.  The price of reasonable communications is constant vigilance and taking the time to build awareness.  A wish for a technical or legal shortcut that will be a magic pill and “fix” everything is doomed to fail.


BananaGlee

BananaGlee. I just love saying that word ;)

So, was reading up on the NSA backdoors for Cisco and other OSes, http://cryptome.org/2014/01/nsa-codenames.htm, and got to thinking about how the NSA might exfiltrate their data or run updates…It’s gotta be pretty stealthy, and I’m sure they have means of reflecting data to/from their Remote Operations Center (ROC) in such a way that you can’t merely look at odd destination IPs from your network.

This got me thinking about how I would find such data on a network. First off, obviously, I’d have to tap the link between the firewall and the edge router. I’d also want to tap the firewall for all internal connections. Each of these taps would be duplicated to a separate network card on a passive device.

1) eliminate all traffic that originated from one interface and went out another interface. This has to be an exact match. I would think any changes outside of TTL would be something that would have to be looked at.

2) what is left after (1) would have to be traffic originating from the firewall (although not necessarily using the firewalls IP or MAC). That’s gotta be a much smaller set of data.

3) With the data set from (2), you’ve gotta just start tracing through each one.

This would, no doubt, be tons of fun. I don’t know how often the device phones home to the ROC, what protocol they might use, etc…

If anyone has any ideas, I’d love to hear them. I find this extremely fascinating.

dmitry.chan@gmail.com


Crafting a Pen Testing Report

You close the lid of your laptop; it’s been a productive couple of days. There are a few things that could be tightened up, but overall the place isn’t doing a bad job. Exchange pleasantries with the people who have begrudgingly given up time to escort you, hand in your visitor’s badge and head for the door. Just as you feel the chill of outside against your skin, you hear a muffled voice in the background.

“Hey, sorry, I forgot to ask, when can we expect the report?”

Sound familiar?

Ugh, the report. Penetration testing’s least favorite cousin, but ultimately, one of the most important.

There are thousands of books written about information security and pen testing. There are hundreds of hours of training courses that cover the penetration testing process. However, I would happily wager that less than ten percent of all the material out there is dedicated to reporting. This, when you consider that you probably spend 40-50% of the total duration of a pen test engagement actually writing the report, is quite alarming.

It’s not surprising though, teaching someone how to write a report just isn’t as sexy as describing how to craft the perfect buffer overflow, or pivot round a network using Metasploit. I totally get that, even learning how the TCP packet structure works for the nineteenth time sounds like a more interesting topic.

A common occurrence amongst many pen testers: not allowing enough time to produce a decent report.

No matter how technically able we are as security testers, it is often a challenge to explain a deeply technical issue to someone who may not have the same level of technical skill. We are often guilty of making assumptions that everyone who works in IT has read the same books, or has the same interests as us. Learning to explain pen test findings in a clear and concise way is an art form, and one that every security professional should take the time to master. The benefits of doing so are great. You’ll develop a better relationship with your clients, who will want to make use of your services over and over again. You’ll also save time and money, trust me. I once drove a 350 mile round trip to go and explain the contents of a penetration test report to a client. I turned up, read some pages of the report aloud with added explanations and then left fifteen minutes later. Had I taken a tiny bit more time clarifying certain issues in my report, I would have saved an entire day of my time and a whole tank of gas.

Diluted: “SSH version one should be disabled as it contains high severity vulnerabilities that may allow an attacker already on the network to intercept and decrypt communications, although the risk of an attacker gaining access to the network is very low, so this reduces the severity.”

Clarified: “It is advisable to disable SSH version one on these devices, failure to do so could allow an attacker with local network access to decrypt and intercept communications.”

Why is a penetration test report so important?

Never forget, penetration testing is a scientific process, and like all scientific processes it should be repeatable by an independent party. If a client disagrees with the findings of a test, they have every right to ask for a second opinion from another tester. If your report doesn’t detail how you arrived at a conclusion, the second tester will have no idea how to repeat the steps you took to get there. This could lead to them offering a different conclusion, making you look a bit silly and worse still, leaving a potential vulnerability exposed to the world.

Bad: “Using a port scanner I detected an open TCP port”.

    Better: “Using Nmap 5.50, a port scanner, I detected an open TCP port using the SYN scanning technique on a selected range of ports. The command line was: nmap -sS -p 7000-8000.”

The report is the tangible output of the testing process, and the only real evidence that a test actually took place. Chances are, senior management (who likely approved funding for the test) weren’t around when the testers came into the office, and even if they were, they probably didn’t pay a great deal of attention. So to them, the report is the only thing they have to go on when justifying the expense of the test. Having a penetration test performed isn’t like any other type of contract work. Once the contract is done there is no new system implemented, or no new pieces of code added to an application. Without the report, it’s very hard to explain to someone what exactly they’ve just paid for.

Who is the report for?

While the exact audience of the report will vary depending on the organization, it’s safe to assume that it will be viewed by at least three types of people.

Senior management, IT management and IT technical staff will all likely see the report, or at least part of it. All of these groups will want to get different snippets of information. Senior management simply doesn’t care, or doesn’t understand what it means if a payment server encrypts connections using SSL version two. All they want to know is the answer to one simple question “are we secure – yay or nay?”

IT management will be interested in the overall security of the organization, but will also want to make sure that their particular departments are not the cause of any major issues discovered during testing. I recall giving one particularly damning report to three IT managers. Upon reading it two of them turned very pale, while the third smiled and said “great, no database security issues then”.

IT staff will be the people responsible for fixing any issues found during testing. They will want to know three things. The name of the system affected, how serious the vulnerability is and how to fix it. They will also want this information presented to them in a way that is clear and organized. I find the best way is to group this information by asset and severity. So for example, “Server A” is vulnerable to “Vulnerability X, Y and Z. Vulnerability Y is the most critical”. This gives IT staff half a chance of working through the list of issues in a reasonable timeframe. There is nothing worse than having to work your way backwards and forwards through pages of report output to try and keep track of vulnerabilities and whether or not they’ve been looked at.

Of course, you could always ask your client how they would like vulnerabilities grouped. After all, the test is really for their benefit and they are the people paying! Some clients prefer to have a page detailing each vulnerability, with affected assets listed under the vulnerability title. This is useful in situations where separate teams may all have responsibilities for different areas of a single asset. For example, the systems team runs the webserver, but the development team writes the code for the application hosted on it.

Although I’ve mentioned the three most common audiences for pen test reports, this isn’t an exhaustive list. Once the report is handed over to the client, it’s up to them what they do with it. It may end up being presented to auditors, as evidence that certain controls are working. It could be presented to potential customers by the sales team. “Anyone can say their product is secure, but can they prove it? We can, look here is a pen test report”.

Reports might even end up getting shared with the whole organization. It sounds crazy, but it happens. I once performed a social engineering test, the results of which were less than ideal for the client. The enraged CEO shared the report with the whole organization, as a way of raising awareness of social engineering attacks. This was made more interesting, when I visited that same company a few weeks later to deliver some security awareness training. During my introduction, I explained that my company did security testing and was responsible for the social engineering test a few weeks back. This was greeted with angry stares and snide comments about how I’d gotten them all into trouble. My response was, as always, “better to give me your passwords than a genuine bad guy”.

What should the report contain?

Sometimes you’ll get lucky and the client will spell out exactly what they want to see in the report during the initial planning phase. This includes both content and layout. I’ve seen this happen to extreme levels of detail, such as what font size and line spacing settings should be used. However, more often than not, the client won’t know what they want and it’ll be your job to tell them.

So without further ado, here are some highly recommended sections to include in pen test reports.

  • A Cover Sheet. This may seem obvious, but the details that should be included on the cover sheet can be less obvious. The name and logo of the testing company, as well as the name of the client should feature prominently. Any title given to the test such as “internal network scan” or “DMZ test” should also be up there, to avoid confusion when performing several tests for the same client. The date the test was performed should appear. If you perform the same tests on a quarterly basis this is very important, so that the client or the client’s auditor can tell whether or not their security posture is improving or getting worse over time. The cover sheet should also contain the document’s classification. Agree this with the client prior to testing; ask them how they want the document protectively marked. A penetration test report is a commercially sensitive document and both you and the client will want to handle it as such.
  • The Executive Summary. I’ve seen some that have gone on for three or four pages and read more like a Jane Austen novel than an abbreviated version of the report’s juicy bits. This needs to be less than a page. Don’t mention any specific tools, technologies or techniques used, they simply don’t care. All they need to know is what you did, “we performed a penetration test of servers belonging to X application”, and what happened, “we found some security problems in one of the payment servers”. What needs to happen next and why “you should tell someone to fix these problems and get us in to re-test the payment server, if you don’t you won’t be PCI compliant and you may get a fine”. The last line of the executive summary should always be a conclusion that explicitly spells out whether or not the systems tested are secure or insecure, “overall we have found this system to be insecure”. It could even be just a single word.

A bad way to end an executive summary: “In conclusion, we have found some areas where security policy is working well, but other areas where it isn’t being followed at all. This leads to some risk, but not a critical amount of risk.”

A better way: “In conclusion, we have identified areas where security policy is not being adhered to, this introduces a risk to the organization and therefore we must declare the system as insecure.”

  • Summary of Vulnerabilities. Group the vulnerabilities on a single page so that at a glance an IT manager can tell how much work needs to be done. You could use fancy graphics like tables or charts to make it clearer – but don’t overdo it. Vulnerabilities can be grouped by category (e.g. software issue, network device configuration, password policy), severity or CVSS score – the possibilities are endless. Just find something that works well and is easy to understand.

  • Test Team Details. It is important to record the name of every tester involved in the testing process. This is not just so you and your colleagues can be hunted down should you break something. It’s a common courtesy to let a client know who has been on their network and provide a point of contact to discuss the report with. Some clients and testing companies also like to rotate the testers assigned to a particular set of tests. It’s always nice to cast a different set of eyes over a system. If you are performing a test for a UK government department under the CHECK scheme, including the name of the team leader and any team members is a mandatory requirement.
  • List of the Tools Used. Include versions and a brief description of the function. This goes back to repeatability. If anyone is going to accurately reproduce your test, they will need to know exactly which tools you used.

  • A copy of the original scope of work. This will have been agreed in advance, but reprinting here for reference purposes is useful.
  • The main body of the report. This is what it’s all about. The main body of the report should include details of all detected vulnerabilities, how you detected the vulnerability, clear technical explanations of how the vulnerability could be exploited, and the likelihood of exploitation. Whatever you do, make sure you write your own explanations; I’ve lost count of the number of reports that I’ve seen that are simply copy and paste jobs from vulnerability scanner output. It makes my skin crawl; it’s unprofessional, often unclear and irrelevant. Detailed remediation advice should also be included. Nothing is more annoying to the person charged with fixing a problem than receiving flakey remediation advice. For example, “Disable SSL version 2 support” does not constitute remediation advice. Explain the exact steps required to disable SSL version 2 support on the platform in question. As interesting as reading how to disable SSL version 2 on Apache is, it’s not very useful if all your servers are running Microsoft IIS. Back up findings with links to references such as vendor security bulletins and CVEs.

Getting the level of detail in a report right is a tricky business. I once wrote a report that was described as “overwhelming” because it was simply too detailed, so on my next test I wrote a less detailed report. This was subsequently rejected because it “lacked detail”. Talk about moving the goalposts. The best thing to do is spend time with the client, learn exactly who the audience will be and what they want to get out of the report.

Final delivery.

When a pilot lands an airliner, their job isn’t over. They still have to navigate the myriad of taxiways and park at the gate safely. The same is true of you and your pen test reports: just because it’s finished doesn’t mean you can switch off entirely. You still have to get the report out to the client, and you have to do so securely. Electronic distribution using public key cryptography is probably the best option, but not always possible. If symmetric encryption is to be used, a strong key should be used and must be transmitted out of band. Under no circumstances should a report be transmitted unencrypted. It all sounds like common sense, but all too often people fall down at the final hurdle.


CyberSec Tips: Email – Spam – Phishing – example 3 – credit checks

A lot of online security and anti-fraud checklists will tell you to check your credit rating with the credit rating reporting companies.  This is a good idea, and, under certain conditions, you can often get such reports free of charge from the ratings companies.

However, you should never get involved with the promises of credit reports that come via spam.

Oddly, these credit report spam messages have very little content, other than a URL, or possibly a URL and some extra text (which usually doesn’t display) meant only to confuse the matter and get by spam filters.  There are lots of these messages: today I got five in only one of my accounts.

I checked one out, very carefully.  The reason to be careful is that you have no idea what is at the end of that URL.  It could be a sales pitch.  It could be an attempt to defraud you.  It could be “drive-by” malware.  In the case I tested, it redirected through four different sites before finally displaying something.  Those four different sites could simply be there to make it harder to trace the spammers and fraudsters, but more likely they were each trying something: registering the fact that my email address was valid (and that there was a live “sucker” attached to it, worth attempting to defraud), installing malware, checking the software and services installed on my computer, and so forth.

It ended up at a site listing a number of financial services.  The domain was “simply-finances.com.”  One indication that this is fraudulent is that the ownership of this domain name is deeply buried.  It appears to be registered through GoDaddy, which makes it hard to check out with a normal “whois” request: you have to go to GoDaddy themselves to get any information.  Once there you find that it is registered through another company called Domains By Proxy, who exist solely to hide the ownership of domains.  Highly suspicious, and no reputable financial company would operate in such a fashion.

The credit rating link sent me to a domain called “transunion.ca.”  The .ca would indicate that this was for credit reporting in Canada, which makes sense, as that is where I live.  (One of the redirection sites probably figured that out, and passed the information along.)  However, that domain is registered to someone in Chicago.  Therefore, it’s probably fraud: why would someone in Chicago have any insight on contacts for credit reporting for Canadians?

It’s probably fraudulent in any case.  What I landed on was an offer to set me up for a service which, for $17 per month, would generate credit ratings reports.  And, of course, it’s asking for lots of information about me, definitely enough to start identity theft.  There is no way I am signing up for this service.

Again, checking out your own credit rating is probably a good idea, although it has to be done regularly, and it only really detects fraud after the fact.  But going through offers via spam is an incredibly bad idea.


CyberSec Tips: Email – Spam – check your filters

Spam filters are getting pretty good these days.  If they weren’t, we’d be inundated.

But they aren’t perfect.

It’s a good idea to check what is being filtered out, every once in a while, to make sure that you are not missing messages you should be getting.  Lots of things can falsely trigger spam filters these days.

Where and how you check will depend on what you use to read your email.  And how you report that something is or isn’t spam will depend on that, too.

If you use the Web based email systems, like Gmail, Yahoo, Outlook/Hotmail, or others, and you use their Web interface, the spam folder usually is listed with other folders, generally to the left side of the browser window.  And, when you are looking at that list, when you select one of the messages, somewhere on the screen, probably near the top, is a button to report that it isn’t spam.

It’s been a couple of weeks since I did this myself, so I checked two of my Webmail accounts this morning.  Both of them had at least one message caught in the spam trap that should have been sent through.  Spam filtering is good, but it isn’t perfect.  You have to take responsibility for your own safety.  And that means checking the things you use to keep you safe.


Review of “cloud drives” – Younited – pt 3

Yesterday I received an update for the Younited client–on the Win7 machine.  The XP machine didn’t update, nor was there any option to do so.

This morning Younited won’t accept the password on the Win7 machine: it won’t log on.  Actually, it seems to be randomly forgetting parts of the password.  As with most programs, it doesn’t show the password (nor is there any option to show it), the password is represented by dots for the characters.  But I’ll have seven characters entered (with seven dots showing), and, all of a sudden, only three dots will be showing.  Or I’ll have entered ten, and suddenly there are only two.


Review of “cloud drives” – Younited – pt 2

My major test of the Younited drive took a few days, but it finally seems to have completed.  In a less than satisfactory manner.

I “synched” a directory on my machine with the Younited drive.  As noted, the synching ran for at least two days.  (My mail and Web access was noticeably slow during that time.)  The original directory, with subdirectories, contained slightly under 7 Gigs of material (the quota for basic Younited drives is said to be 10 G) in slightly under 2,800 files.  The transfer progress now shows 5,899 files transferred, and I’m out of space.

A quick check shows that not all files are on the Younited drive.

Share