Wacky Weekend: The '.secure' gTLD

Jimmy Hess mysidia at gmail.com
Sun Jun 3 21:49:47 CDT 2012


On 5/31/12, Jay Ashworth <jra at baylink.com> wrote:
> HTTP redirects funneling connections towards the appropriate TLS-encrypted
> site), use DNSSEC, and deploy DomainKeys Identified Mail (DKIM) for spam

The "except for HTTP redirects" part is a gigantonormous hole.  A
MITM attacker on a LAN can intercept traffic to the non-HTTPS redirect
site and proxy it.  The ".SECURE" in the TLD looks like a user
interface declaration: the user will believe that the appearance of
.SECURE means their connection is encrypted, even when it is not.
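To make the hole concrete, here is a minimal sketch (hypothetical, in
the style of sslstrip) of the rewrite an on-path attacker can perform
on the unencrypted redirect hop before the user ever reaches HTTPS:

```python
def strip_https_redirect(response: bytes) -> bytes:
    """Rewrite an HTTPS redirect to plain HTTP, as an on-path
    attacker proxying the unencrypted hop could do.  Illustration
    only; header names and the .secure host are hypothetical."""
    head, _, body = response.partition(b"\r\n\r\n")
    out_lines = []
    for line in head.split(b"\r\n"):
        # Downgrade the Location target; the victim's browser then
        # stays on plaintext HTTP, which the attacker keeps proxying.
        if line.lower().startswith(b"location: https://"):
            line = b"Location: http://" + line[len(b"Location: https://"):]
        out_lines.append(line)
    return b"\r\n".join(out_lines) + b"\r\n\r\n" + body
```

Because the victim never sees the HTTPS leg, no certificate warning
ever fires; the ".SECURE" in the address bar is all the "assurance"
they get.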

The TLD should probably not be allowed, because it is confusing: it
looks like a user interface declaration that the site is proven to be
secure, but none of the registry's proposed measures provide reliable
assurance.  It may lead the user to believe that ".SECURE" is a
technical indication that the site is actually secure.

Even HTTPS and EV SSL do not provide such a strong UI declaration.  A
UI declaration should not be possible to make by the registration of a
domain alone; the software displaying the URL should be responsible
for UI declarations.

This may send mixed signals if a site on an SLD under .SECURE is
actually compromised, which is more harmful than having no UI
declaration at all.



Absent a new RFC requirement for browsers to connect to .SECURE TLD
sites using only HTTPS, their "non-HTTPS redirect to HTTPS pages" are
just as susceptible to MITM hijacking as any non-HTTPS site.
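Such a requirement would amount to a client-side rule, not a registry
policy.  A minimal sketch of what a browser-enforced rule might look
like (purely hypothetical; no such requirement exists, and the
upgrade logic here is my own illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def enforce_secure_tld(url: str) -> str:
    """Hypothetical client-side rule: refuse plaintext HTTP for any
    host under .SECURE by upgrading the scheme before the first
    request ever leaves the machine, so there is no unencrypted
    redirect hop for an attacker to intercept."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.scheme == "http" and host.endswith(".secure"):
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

The point is that only the user agent can close the hole; the
registry's redirect scheme cannot.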

> prevention. In addition, Artemis would employ a rigorous screening process
> to  verify registrants' identities (including reviewing articles of incorporation
> and human interviews), and routinely conduct security scans of registered
> sites. The venture has $9.6 million (US) in funding provided by Artemis'

This is expensive, which is a good way to keep the TLD out of reach
of anyone but large corporations, and therefore of very limited value
to the community.  Requiring registrants to meet a generally accepted
security standard, with third-party auditing, would be more useful.

"Security scans" by one provider aren't really good enough.  Automated
scans cannot detect insidious exploitability issues; they typically
flag non-issues to justify their existence, while failing to detect
glaring issues such as session tracking implemented in a manner
vulnerable to CSRF.

More importantly, remote periodic scans cannot detect compromise of
the site or ensure reasonable internal security practices.  When the
impact is an information leak, intruders don't always insert malware
on the front page for a scanner to pick up.


--
-JH


