
Avoiding NSA Trap


“We never give up,” the National Security Agency (NSA) boasts in one of the strategy papers from Edward Snowden’s repository, a document that offers a glimpse of its plans for 2016. Journalist Glenn Greenwald, in whom Snowden confided, is sure about one thing: if nothing changes, the US intelligence service will aim for total surveillance of the web. Without aids such as open-source software and open-source encryption, this would not only jeopardise privacy on the Internet but eradicate it completely. The question before us is: how will privacy on the Internet survive, and who will fight for it?

So far, the intelligence service has been unrelenting in expanding its resources despite all the criticism. The huge Utah Data Centre began operations in mid-2013 and has been upgraded continuously since then. Experts dispute its storage capacity: estimates range from an exaggerated few yottabytes (a yottabyte is a trillion terabytes) to a more realistic 12 exabytes (12 million terabytes). The latter seems small given that more than one exabyte of data is transferred worldwide every day. However, the NSA operates several similar facilities and is expanding again: a data centre in Fort Meade, planned for 2016, would be half the size of the Utah complex. The NSA thus has enough capacity to store all global connection data indefinitely, and a snapshot of the content at least for a certain period.

There is no shortage of instruments for data collection, and they can be roughly divided into the following groups:

  1. Hacking of central interfaces.
  2. Cooperation between intelligence services.
  3. Secret court orders that force third parties to cooperate.

A blind collection mania without limits

It has long been known that the British Government Communications Headquarters (GCHQ) taps the Atlantic sea cables. There have also been reports of hacks of SEA-ME-WE 3, the cable that connects Europe with Asia and is operated by BICS (Belgacom International Carrier Services), a subsidiary of the Belgian mobile communications provider Belgacom. A positive side effect for the NSA: as a result of these hacks, GCHQ has been giving the NSA access to international mobile communication data via the Belgacom roaming router since 2010. But the NSA also has official ways of accessing the personal data of Internet users.

The Foreign Intelligence Surveillance Court (FISC), whose sessions are closed to the public, obligates providers and IT service companies to cooperate with the NSA. These include globally operating backbone providers such as Level 3, which manage web traffic in the background and connect smaller sub-networks. The FISC has also ordered big web services such as Google, Microsoft and Yahoo to cooperate; in other words, they cannot really guarantee their users a private cloud. The same is true for their proprietary software, such as Microsoft’s Skype. The real horror, however, is when the NSA targets a particular surfer: there is little that can protect the surfer in that case. It is evident that the means and methods used by the NSA are inexhaustible, and its agents are already speaking of a “golden age”. And if you think these methods satisfy the NSA’s curiosity and hunger for data, you are obviously not aware of its “SIGINT Strategy” paper.

The acronym stands for “Signals Intelligence”, and in this document the agency sets out its goal for 2016: collecting data about every user, anytime, anywhere. The paper names all the traces a surfer leaves behind on the Internet, and if new data and communication services emerge, the agency intends to develop the necessary spying tools immediately. According to the NSA, only two hurdles stand in its way: politics and encryption. It fears that authorities and politicians will not keep pace with the dynamics of the information age and that legislation will be modified too slowly for its liking. But the NSA is not afraid of laws that curb its activities – and rightly so: US President Obama’s plans for an NSA reform do not seem very rigorous. They focus on more transparency for the FISC but do not affect the tapping and collection of private data in any way.

Strong encryption technology would prevent data spying on a large scale. To prevent this, the NSA wants to influence the commercial encryption market behind the scenes (more on this later in the article) and build its own arsenal against crypto technologies. The secret service wants to be able to monitor everything even after encrypted data traffic becomes common practice.

Escaping from total surveillance

But in 2016 the world could look very different from what the NSA envisions. It is possible that the global network will be split up into regional centres; René Obermann, the former chief executive of the German telecommunications company Deutsche Telekom AG, has made such an assumption. But the idea of a closed regional network such as the Schengen Information System (SIS), or even a purely German network, has already drawn strong criticism. Routing web communication so that it primarily stays within the Schengen states, or only within Germany, runs contrary to the way data streams currently flow on the web.

From an organisational point of view, the Internet is made up of many independent autonomous systems (AS), each controlling one or more IP networks and each operated by a provider, large or small. Companies like Level 3 or Verizon, which operate a global backbone, sit at the topmost level (Tier 1). The lowest level (Tier 3) consists of local DSL providers, through which most users access the Internet; Deutsche Telekom, however, considers itself a Tier 1 provider. Data traffic generally passes through several AS networks, with the providers entering into peering contracts that determine the transit costs.

The transfer from one AS to the next often takes place at Internet exchange points, for example the DE-CIX in Frankfurt. This task is also performed by globally operating American companies such as Verizon, which forward all kinds of data flows, even local ones. The Verizon network comprises more than 700,000 kilometres of fibre-optic cable, including 80 submarine cables. It is therefore quite possible that Verizon carries traffic from one German provider to another. How the data flows – only through Germany or with detours – is decided by Verizon based on technical factors such as the load on its own cables and routers, or the transfer costs.
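
To make this idea concrete, here is a minimal, purely illustrative sketch in Python. The AS names and link costs are invented for the example, and real inter-domain routing is governed by BGP policies and peering contracts rather than a single cost metric; the point is only that the chosen path depends on such costs, not on geography.

```python
import heapq

# Toy model: each autonomous system (AS) is a node, each peering/transit
# link carries an illustrative cost (load, transit fees, ...).
# All AS names and costs below are invented for this example.
AS_LINKS = {
    "AS-DSL-Berlin": {"AS-Telekom": 1, "AS-Verizon": 2},
    "AS-Telekom":    {"AS-DSL-Berlin": 1, "AS-DECIX": 1, "AS-Verizon": 3},
    "AS-Verizon":    {"AS-DSL-Berlin": 2, "AS-Telekom": 3, "AS-DSL-Munich": 2},
    "AS-DECIX":      {"AS-Telekom": 1, "AS-DSL-Munich": 1},
    "AS-DSL-Munich": {"AS-DECIX": 1, "AS-Verizon": 2},
}

def cheapest_path(src, dst):
    """Pick the route with the lowest total cost (Dijkstra's algorithm)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, link_cost in AS_LINKS[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbour, path + [neighbour]))
    return None

# Traffic between two German access providers may or may not stay "local":
# the route is chosen by cost, not geography.
print(cheapest_path("AS-DSL-Berlin", "AS-DSL-Munich"))
```

In this toy network the traffic happens to stay on the route via DE-CIX because it is the cheapest; lower the cost of the Verizon links and the same "German" traffic would take the detour, which is exactly the behaviour a mandated Schengen or Germany-only routing would have to override.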

SIS: the wrong diversion

A government-mandated network such as the SIS would ignore these technical constraints and enforce local routing, thereby increasing the risk of bottlenecks. On the other hand, providers such as Telekom could continue to link the local providers with their own network, of course in return for a corresponding remuneration.

But this would create problems for large, open Internet exchanges like DE-CIX. DE-CIX CEO Harald Summa has therefore condemned Telekom’s suggestion as a pure “marketing action that misleads politics”. There would also be little to gain from a Schengen network, because it would still be possible to tap cables and because the majority of web communication is not confined to it anyway. The best example of this is a Google query.

Google operates two of its thirteen data centres in Ireland and Finland, but it also creates regular backups at a global level over its internal network and routes queries to other data centres, for example those in the USA. All the big Internet companies such as Google, Yahoo and Microsoft run their own cables for internal data traffic. These are not connected to the public Internet and for a long time went without encryption. Google and co. were not amused when they discovered that the NSA taps these cables too.

Some companies like Microsoft and Yahoo have already taken action and started encrypting internal data transfers, and more and more services now offer a protected HTTPS connection to the user. This is no surprise: according to the market analysts at Forrester Research, the big American service providers risk losing close to US$180 billion by 2016 as a result of the Snowden revelations.

Encryption: the last resort

Experts consider permanent encryption to be the best, and almost the only, defence against this NSA monitoring. However, even here the devil (read: NSA) lies in the detail. Until now, the world has followed the USA’s lead when it comes to implementing new cryptographic standards. Practically the entire IT industry follows the guidelines of the National Institute of Standards and Technology (NIST), not least because they are binding for US authorities. These guidelines ultimately determine how effective encryption software can be.

But NIST is obliged to cooperate with the NSA, because the intelligence service is also tasked with securing the national network infrastructure. The Snowden revelations confirmed the long-standing suspicion that the random number generator Dual_EC_DRBG contains an NSA backdoor. What’s more, the NSA is said to have paid RSA Security US$10 million to make Dual_EC_DRBG the default generator in its encryption library BSAFE. RSA Security is not just any company; it was founded by the researchers who invented the asymmetric encryption algorithm RSA, which is used in almost all HTTPS connections.

If general encryption of Internet communication is to be a way out of the NSA dragnet, the cryptographic foundations must first be revised, at least for Europe. For this there is the European Union Agency for Network and Information Security (ENISA), an institution that also issues cryptography guidelines. ENISA’s most recent recommendation, from November, is to avoid the insecure RC4 cipher wherever possible. NSA-critical security experts such as Jacob Appelbaum and Bruce Schneier consider it possible that the NSA can crack many RC4 variants in real time. Yet most servers still use RC4 when establishing an encrypted HTTPS connection in day-to-day use of the web. Using secure algorithms is therefore only the first step; the second and far more important step is their consistent use. The Internet Engineering Task Force (IETF), which is developing a new version of the web protocol HTTP, could point the way here.
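
As an aside, you can check which cipher a server actually negotiates for an HTTPS connection with a few lines of Python. This is a minimal sketch using only the standard ssl module; the hostname is just a placeholder.

```python
import socket
import ssl

def negotiated_cipher(host: str, port: int = 443):
    """Open a TLS connection and report which cipher suite was negotiated."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            return tls_sock.cipher()  # (cipher name, protocol version, secret bits)

host = "www.example.com"  # placeholder hostname
name, version, bits = negotiated_cipher(host)
print(f"{host}: {name} ({version}, {bits}-bit secret)")
if "RC4" in name:
    print("Warning: this server still negotiates RC4.")
```

Note that a current Python build will itself refuse to offer RC4 in its default context, so the warning is mostly of historical interest; the check simply makes visible which suite client and server agree on.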

According to Mark Nottingham, chairman of the HTTPbis working group, the Firefox and Chrome developers initially wanted to make encryption mandatory in HTTP 2.0. The biggest hurdle is the obligatory server authentication with a certificate: it guarantees the user the identity of the server but costs money. That is not a problem for big web services like Google, but small providers cannot always afford it. Nottingham therefore favours a solution outside the standard: encryption would not be obligatory for servers that support HTTP 2.0, but unlike with HTTP 1.1, browsers themselves would demand an encrypted connection from the server. If the server cannot provide one, an unencrypted connection is established via HTTP 1.1 instead of HTTP 2.0, and in an ideal scenario the browser notifies the user about this. The IETF is also discussing encrypting HTTP without certificates.
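
The fallback behaviour described above can be illustrated with a small, purely hypothetical client sketch in Python. It does not implement HTTP 2.0 itself; it only mimics the “try an encrypted connection first, fall back to plaintext and warn the user” logic using the standard library, and the hostname is a placeholder.

```python
import urllib.request
import urllib.error
import ssl

def fetch_prefer_encrypted(host: str) -> bytes:
    """Try HTTPS first; fall back to plain HTTP and warn the user."""
    try:
        with urllib.request.urlopen(f"https://{host}/", timeout=10) as resp:
            return resp.read()
    except (urllib.error.URLError, ssl.SSLError, OSError):
        print(f"Warning: {host} does not offer an encrypted connection, "
              "falling back to unencrypted HTTP.")
        with urllib.request.urlopen(f"http://{host}/", timeout=10) as resp:
            return resp.read()

body = fetch_prefer_encrypted("www.example.com")  # placeholder hostname
print(len(body), "bytes received")
```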

Although this Relaxed TLS procedure does not protect data traffic from targeted attacks, it makes bulk tapping of submarine cables useless. If it is combined with Perfect Forward Secrecy (PFS), no intelligence service can later decrypt the tapped, protected communication. Unlike conventional asymmetric encryption, browsers and servers using PFS communicate with a one-time session key that is deleted after use. With plain asymmetric encryption, by contrast, the server holds a long-term key that an intelligence service could procure.
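
The idea behind PFS can be illustrated with a toy Diffie-Hellman exchange in Python: both sides generate fresh, ephemeral secrets for each session, derive the same shared key, and then throw their secrets away. The tiny prime and the key derivation below are for illustration only and are nowhere near what real TLS uses.

```python
import secrets
import hashlib

# Toy Diffie-Hellman parameters (a real exchange uses a much larger prime
# or an elliptic curve; these small numbers are for illustration only).
P = 0xFFFFFFFB  # a small prime (2**32 - 5)
G = 5           # generator

def ephemeral_keypair():
    """Generate a fresh, one-time secret and the matching public value."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

# Each side creates an ephemeral key pair for this one session.
browser_priv, browser_pub = ephemeral_keypair()
server_priv, server_pub = ephemeral_keypair()

# Both sides derive the same session key from the other's public value.
browser_key = hashlib.sha256(str(pow(server_pub, browser_priv, P)).encode()).hexdigest()
server_key = hashlib.sha256(str(pow(browser_pub, server_priv, P)).encode()).hexdigest()
assert browser_key == server_key

# The ephemeral secrets are discarded after the session: even if a long-term
# server key is seized later, this session key cannot be recomputed.
del browser_priv, server_priv
print("shared session key:", browser_key[:16], "...")
```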

Using encrypted web services

Coherent crypto algorithms and more web encryption must be supplemented with corresponding software and services. Many cloud services encrypt data during uploads and downloads, but the data itself is stored unencrypted. It would be better to have client-side encryption, which encrypts cloud data before the upload and keeps the key on the PC. A number of services, such as Spideroak or Wuala, specialise in this type of encryption. But popular providers such as Dropbox, which have to manage huge data volumes, have shown little interest in client-side encryption: it prevents deduplication, a technique for storing a file only once even though it is uploaded by several users, which saves space and therefore money. At least there are plug-ins that can retrofit Dropbox with encryption.

The situation is bleaker for mail services, because permanent encryption is difficult to implement on the server. Sender and recipient would first have to exchange public keys and encrypt their mails asymmetrically with PGP. That is feasible with tools such as GnuPG in individual cases, but mail providers cannot offer it as a mass solution. Moreover, free services such as Gmail thrive on analysing mails and displaying matching ads, and encryption would also render their spam filters ineffective.

Permanent encryption is easier to implement for real-time communication, for example chat. BitTorrent Chat, currently in its alpha phase, is completely independent of a server: the communication between participants is encrypted by default and the keys are generated with Perfect Forward Secrecy.
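
As a rough sketch of what client-side encryption means in practice: the file is encrypted locally, only the ciphertext ever leaves the machine, and the key stays on the PC. The example below assumes the third-party cryptography package (pip install cryptography), and the upload step is a hypothetical placeholder for whichever cloud client you use.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_before_upload(path: Path, key_file: Path) -> bytes:
    """Encrypt a file locally; only the ciphertext leaves the machine."""
    if key_file.exists():
        key = key_file.read_bytes()
    else:
        key = Fernet.generate_key()
        key_file.write_bytes(key)  # the key stays on the PC
    return Fernet(key).encrypt(path.read_bytes())

# Hypothetical usage: 'upload' stands in for the cloud client of your choice.
# ciphertext = encrypt_before_upload(Path("report.pdf"), Path("cloud.key"))
# upload("report.pdf.enc", ciphertext)
```

The obvious downside, as noted above, is that two users uploading the same file produce different ciphertexts, so the provider can no longer deduplicate.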
