Quelques digressions sous GPL...


"A" Grade SSL/TLS with Nginx and StartSSL

I've been spending the past 3 months doing a full review of the TLS landscape. I plan on publishing it very soon (just need to polish the doc), and in the meantime, here are a few pointers for configuring state-of-the-art TLS on Nginx.

A note for Apache users: you might want to check out kang's blog post on the matter. It's quite similar to what I'm going to talk about here, but with the proper Apache config parameters.

SSLLabs provides a good way to test the quality of your SSL configuration. Click the image above to view the results for jve.linuxwall.info.

StartSSL free class1 certificate

For a personal site, StartSSL.com provides free certificates that are largely sufficient. Head over there and get your own cert, the process is rather straightforward and well explained on their site.

Building Nginx

This blog runs on a version of Debian that's not bleeding edge (as one would expect Debian to be). The version of OpenSSL is getting old, and so is Nginx. I had to build Nginx from source to get support for the latest ECC ciphers and OCSP stapling.

To build Nginx from source, you will need a copy of the PCRE and OpenSSL libraries:

Decompress both libraries next to the Nginx source code:

julien@sachiel:~/nginx_openssl$ ls
build_static_nginx.sh  nginx  openssl-1.0.1e  pcre-8.33

The script build_static_nginx.sh takes care of the rest. It should work out of the box, but you might have to edit the paths if you have different versions of the libraries. It builds a static version of OpenSSL into Nginx, so you don't have to install the openssl libs afterward.

#!/usr/bin/env bash
export BPATH=$(pwd)
export STATICLIBSSL="$BPATH/staticlibssl"

#-- Build static openssl
cd $BPATH/openssl-1.0.1e
make clean
./config --prefix=$STATICLIBSSL no-shared enable-ec_nistp_64_gcc_128 \
&& make depend \
&& make \
&& make install_sw

#-- Build nginx
hg clone http://hg.nginx.org/nginx
cd $BPATH/nginx
mkdir -p $BPATH/opt/nginx
hg pull
./auto/configure --with-cc-opt="-I $STATICLIBSSL/include -I/usr/include" \
--with-ld-opt="-L $STATICLIBSSL/lib -Wl,-rpath -lssl -lcrypto -ldl -lz" \
--prefix=$BPATH/opt/nginx \
--with-pcre=$BPATH/pcre-8.33 \
--with-http_ssl_module \
--with-http_spdy_module \
--with-file-aio \
--with-ipv6 \
--with-http_gzip_static_module \
--with-http_stub_status_module \
--without-mail_pop3_module \
--without-mail_smtp_module \
--without-mail_imap_module \
&& make && make install
NGINXBIN="$BPATH/opt/nginx/sbin/nginx"
if [ -x $NGINXBIN ]; then
    echo -e "\nNginx binary built in $NGINXBIN\n"
fi

Server Name Identification

SNI is useful if you plan on having multiple SSL certs on the same IP address. It adds an extension to the TLS handshake that lets the client announce the hostname it wants to reach in the CLIENT HELLO. This announcement is then used by the web server to select the certificate to send back in the SERVER HELLO.
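If openssl is installed, you can watch SNI in action: s_client sends the hostname passed to -servername in its CLIENT HELLO. A quick sketch, using this blog's hostname as an example (replace it with one of your own vhosts):

```shell
# Send an SNI extension in the CLIENT HELLO, and print the subject of
# the certificate the server selected for that hostname.
openssl s_client -connect jve.linuxwall.info:443 \
    -servername jve.linuxwall.info < /dev/null 2>/dev/null \
    | openssl x509 -noout -subject
```

Running it against a server hosting several certs on one IP, with different values of -servername, should return different certificates.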

Support for SNI is built into recent versions of nginx. Use nginx -V to check:

# $BPATH/opt/nginx/sbin/nginx -V
TLS SNI support enabled

Step by step Nginx configuration

The full configuration is at the end of this post. Head over there directly if you're not interested in the details.


ssl_certificate /path/to/signed_cert_plus_intermediates;

This parameter points to a file that contains the server and intermediate certificates, concatenated together. Nginx loads that file and sends its content in the SERVER HELLO message during the handshake.


ssl_certificate_key /path/to/private_key;

This is the path to the private key.


ssl_dhparam /path/to/dhparam.pem;

When DHE ciphers are used, a prime number is shared between server and client to perform the Diffie-Hellman key exchange. I won't get into the details of Perfect Forward Secrecy here, but do know that the larger the prime, the better the security. Nginx lets you specify the prime number you want the server to send to the client in the ssl_dhparam directive. The prime is sent by the server to the client in the Server Key Exchange message of the handshake. To generate the dhparam, use openssl:

$ openssl dhparam -rand - 4096

A word of warning though: it appears that Java 6 does not support dhparams larger than 1024 bits. Clients that use Java 6 won't be able to connect to your site if you use a larger dhparam. (There might be issues with other libraries as well; I only know about the Java one.)
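Before deploying a dhparam, you can check its prime size, which tells you whether Java 6 clients will break. A quick sketch, using a 1024-bit parameter here only because it generates fast:

```shell
# Generate a test dhparam (use 2048 bits or more in production).
openssl dhparam -out /tmp/dhparam_test.pem 1024 2>/dev/null

# Print the prime size; output looks like "DH Parameters: (1024 bit)"
openssl dhparam -in /tmp/dhparam_test.pem -text -noout | head -n 1
```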


ssl_session_timeout 5m;

When a client connects multiple times to a server, the server uses session caching to accelerate the subsequent handshakes, effectively reusing the session key generated in the first handshake multiple times. This is called session resumption. This parameter sets the session timeout to 5 minutes, meaning that the session key will be deleted from the cache if not used for 5 minutes.


ssl_session_cache shared:NginxCache123:50m;

The session cache holds the session keys of all active sessions, in memory shared between the Nginx worker processes. Anyone able to read that cache can decrypt resumed sessions, so make sure that your OS level security is appropriate. This parameter defines a shared cache with a max size of 50MB.


ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

List the versions of TLS you wish to support. It's pretty much safe to disable SSLv3 these days, but TLSv1 is still required by a bunch of clients. Remember that clients are not only web browsers, but also libraries that might be used to crawl your site.


The ciphersuite is truly the core of an SSL configuration. Mine is very long, and I spent a ridiculous amount of time researching it. I won't get into the details of its construction here, as I'll be writing more on this in the next few weeks.


ssl_prefer_server_ciphers on;

This parameter forces nginx to pick the preferred cipher from its own ciphersuite, as opposed to using the one preferred by the client. This is an important option, since most clients have unsafe or outdated preferences, and you'll most likely provide better security by enforcing a strong ciphersuite server-side.

HTTP Strict Transport Security

add_header Strict-Transport-Security max-age=15768000;

HSTS is an HTTP header that tells clients to connect to the site using HTTPS only. It enforces security by telling clients that any HTTP URL to a given site should be ignored. The directive is cached on the client side for the duration of max-age, in this case 182 days.
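You can verify both the arithmetic and the header. A quick check (the hostname is just an example, point curl at your own site):

```shell
# 15768000 seconds in days, integer division:
echo $(( 15768000 / 86400 ))    # prints 182

# Fetch only the response headers and look for HSTS:
curl -sI https://jve.linuxwall.info/ | grep -i '^strict-transport-security'
```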


When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL), or an Online Certificate Status Protocol (OCSP) record. The problem with CRL is that the lists have grown huge and take forever to download. OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a 3rd party OCSP responder when connecting to a server.

The solution is to allow the server to send the OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.

Nginx supports OCSP stapling in two modes. The OCSP file can be downloaded and made available to nginx, or nginx itself can retrieve the OCSP record and cache it. We use the second mode in the configuration below.

The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate:

Authority Information Access: 
      OCSP - URI:http://ocsp.startssl.com/sub/class1/server/ca


ssl_stapling_verify on;

Nginx has the ability to verify the OCSP record before caching it. But to enable it, a list of trusted certificates must be available in the ssl_trusted_certificate parameter.


ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;

This is the path to a file where CA certificates are concatenated. For ssl_stapling_verify to work, this file must contain the Root CA cert and the Intermediate CA certificates. In the case of StartSSL, the Root CA and Intermediate I use are here: http://jve.linuxwall.info/ressources/code/startssl_trust_chain.txt


resolver 8.8.8.8;

Nginx needs a DNS resolver to obtain the IP address of the OCSP responder. In this example, I use Google's.
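Once everything is reloaded, you can ask the server to staple an OCSP response during the handshake with openssl s_client (hostname again an example):

```shell
# -status requests the certificate status (OCSP stapling) TLS extension.
# A working setup prints "OCSP Response Status: successful".
openssl s_client -connect jve.linuxwall.info:443 -status < /dev/null 2>/dev/null \
    | grep -i 'OCSP'
```

Note that nginx fetches the OCSP record lazily, so the very first handshake after a restart may come back without a stapled response; try a second time.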

Full configuration

The full configuration is below, feel free to copy and paste it, and check your error logs if something fails to work.

server {

    listen 443;
    ssl on;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /path/to/signed_cert_plus_intermediates;

    ssl_certificate_key /path/to/private_key;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /path/to/dhparam.pem;

    ssl_session_timeout 5m;

    ssl_session_cache shared:NginxCache123:50m;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;


    ssl_prefer_server_ciphers on;
    # Enable this if you want HSTS (recommended, but be careful)
    add_header Strict-Transport-Security max-age=15768000;
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;

    ssl_stapling_verify on;

    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;

    # DNS resolver used to query the OCSP responder
    resolver 8.8.8.8;

    <insert the rest of your server configuration here>
}

The Fox is in Brussels


600 people, that's quite a crowd! We're starting the 2nd day of the summit and I must say that the organization has been flawless so far. So congratulations to the summit team, and thank you :)

I took some photos, and will continue to do so over the weekend. Click the image below to go to the gallery.


My phone does crypto faster than my servers

(or, at least, faster than some of my servers)

I ran some openssl speed tests today, to figure out the speed difference between AES-128 and AES-256 on multiple platforms, with and without hardware acceleration (AES-NI). AES-256 is 25% slower than AES-128 on average. If you're interested, the discussion goes on the dev-tech-crypto mailing list at https://groups.google.com/d/msg/mozilla.dev.tech.crypto/36na1B2brGU/xUMMPMgkmEMJ
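The measurements come from openssl speed. A minimal way to reproduce the comparison; the -evp flag routes through the EVP interface, which is what picks up AES-NI on CPUs that have it:

```shell
# Classic software implementation:
openssl speed aes-128-cbc

# EVP interface, hardware-accelerated when AES-NI is available:
openssl speed -evp aes-128-cbc
openssl speed -evp aes-256-cbc
```

Each run prints throughput for several block sizes; comparing the aes-128 and aes-256 lines reproduces the ~25% gap mentioned above.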

But more interestingly, I ran some openssl tests on Android, on my Galaxy S3 equipped with an ARMv7 (1.5GHz dual-core Qualcomm MSM8900). And it does AES faster than 3 small servers I own. One is a Dedibox from the French hosting company online.fr that runs on a VIA CPU. The second is an AWS EC2 medium instance that exposes an AMD Opteron. And the third is a home server that runs on an Intel Atom D510.

The block size matters, but the gist of it is: the ARMv7 is at least as fast as any of the other three, and sometimes up to 50% faster.

The full table is here: http://jve.linuxwall.info/ressources/taf/aesmeasurements.txt

Of course, compared to the Intel Core i7 that equips my laptop, which supports AES-NI and can encrypt ~620MB/s, the ARM is far behind. But with a throughput of 68MB/s, the days of slow crypto on cellphones are over. And this is excellent news!

Hardware RNG in VIA CPU (on Dedibox)

4 years ago I was writing about getting an eKey to generate more entropy. Well, I never bought the eKey, and it took me 4 years to look at entropy generation again, but I found some interesting results today.

I was trying to set up /dev/hw_random on an Intel Xeon E3-1270. While /proc/cpuinfo indicates support for rdrand instructions, intel-rng refuses to load and I am still figuring out why.

While reading the readme of rng-tools, I stumbled across "The VIA Hardware RNG". Remembering that my Dedibox from online.fr uses a VIA Nano processor U2250, I ssh-ed into it, and discovered the awesomeness of this tiny CPU.

# cat /proc/cpuinfo 
processor	: 0
vendor_id	: CentaurHauls
cpu family	: 6
model		: 15
model name	: VIA Nano processor U2250 (1.6GHz Capable)
< ... snip ...>
flags		: ...  rng rng_en ...
< ... snip ...>

Note the 'rng' cpu flag above. That's all that's needed to use the hwrng. Well, that and a few kernel modules:

# modprobe rng-core

# modprobe via_rng

# file /dev/hwrng 
/dev/hwrng: character special

The hardware RNG is exposed at /dev/hwrng. Using the tools from the 'rng-tools' package, we can test the quality of the randomness it provides.

# rngtest -c 100 < /dev/hwrng 
rngtest 2-unofficial-mt.14
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions.  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

rngtest: starting FIPS tests...
rngtest: bits received from input: 2000032
rngtest: FIPS 140-2 successes: 100
rngtest: FIPS 140-2 failures: 0
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 0
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=140.395; avg=339.623; max=369.441)Kibits/s
rngtest: FIPS tests speed: (min=3.289; avg=48.125; max=57.798)Mibits/s
rngtest: Program run time: 5790927 microseconds

All FIPS tests pass. Now a small bandwidth test:

# dd if=/dev/hwrng of=awesomerandom bs=128 count=10240
10240+0 records in
10240+0 records out
1310720 bytes (1.3 MB) copied, 27.9087 s, 47.0 kB/s

47kB/s, or 385024 bits per second, is pretty damn good for a random number generator! Now let's feed that into the kernel's entropy pool using rngd.

The initial entropy pool, while using SSH on the server:

# cat /proc/sys/kernel/random/entropy_avail

Once rngd was installed, I configured the default parameters in /etc/default/rng-tools as shown below. I tried using the more recent viapadlock driver, but it doesn't seem to be supported on my architecture. The viakernel driver worked fine.

# cat /etc/default/rng-tools 
# Configuration for the rng-tools initscript
# $Id: rng-tools.default,v 2008-06-10 19:51:37 hmh Exp $

# This is a POSIX shell fragment

# Set to the input source for random data, leave undefined
# for the initscript to attempt auto-detection.  Set to /dev/null
# for the viapadlock driver.

# Additional options to send to rngd. See the rngd(8) manpage for
# more information.  Do not specify -r/--rng-device here, use
# HRNGDEVICE for that instead.
#RNGDOPTIONS="--hrng=intelfwh --fill-watermark=90% --feed-interval=1"
RNGDOPTIONS="--hrng=viakernel --fill-watermark=90% --feed-interval=1"
#RNGDOPTIONS="--hrng=viapadlock --fill-watermark=90% --feed-interval=1"

Then restart rngd:

# /etc/init.d/rng-tools restart
Stopping Hardware RNG entropy gatherer daemon: rngd.
Starting Hardware RNG entropy gatherer daemon: rngd.

# ps aux|grep rngd
root      6539  3.2  0.0  30628   616 ?        SLsl 17:42   0:00 /usr/sbin/rngd -r /dev/hwrng --hrng=viakernel --fill-watermark=90% --feed-interval=1

And, as a result, the entropy pool filled up immediately:

# cat /proc/sys/kernel/random/entropy_avail

A simple test shows the immense difference in available entropy. The results below show that retrieving 120kB of randomness takes 3.4s with rngd enabled, and forever without it. I had to kill dd after several minutes because it was getting nowhere. As soon as I restarted rngd, the pool filled up again.

# dd if=/dev/random of=randomstuff bs=128 count=1024
768+256 records in
768+256 records out
120181 bytes (120 kB) copied, 3.47491 s, 34.6 kB/s

# /etc/init.d/rng-tools stop
Stopping Hardware RNG entropy gatherer daemon: rngd.

# dd if=/dev/random of=randomstuff bs=128 count=1024
^C2+10 records in
2+10 records out
450 bytes (450 B) copied, 114.238 s, 0.0 kB/s

# cat /proc/sys/kernel/random/entropy_avail

# /etc/init.d/rng-tools start
Starting Hardware RNG entropy gatherer daemon: rngd.

# cat /proc/sys/kernel/random/entropy_avail

I've had this server for 3 years, and I never thought it supported a hardware RNG. Today is a good day :)

Home made Keystore with OpenSSL and Bash

Storing passwords and confidential data across multiple computers is always a complex problem to solve. There are great tools, such as KeePass, that can do this in a secure manner, but for some reason I always felt uneasy about giving my bank password to 3rd party software.

So I made my own. It's a simple file encrypted with aes-256-cbc using openssl.

Create a new encrypted file using this command:

echo "my new credential file" | openssl aes-256-cbc -e -a -salt -out mycredentials.encrypted

This creates the file "mycredentials.encrypted". You can then use the scripts below to read your credentials:
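Before relying on the scripts, you can sanity-check the cipher options with a quick round-trip. The -k flag passes the password non-interactively and is used here only for the demo; interactively, omit it and let openssl prompt you:

```shell
# Encrypt a line, then decrypt it back with the same options.
echo "my new credential file" \
    | openssl aes-256-cbc -e -a -salt -k demopassword \
    | openssl aes-256-cbc -d -a -salt -k demopassword
# prints: my new credential file
```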


#!/usr/bin/env bash
# read credentials from a ciphered file
if [[ -z "$1" || ! -r "$1" ]]; then
    echo "usage: $0 <ciphered file>"
    exit 1
fi
SECFILE="$1"
CLEARTEXT=$(openssl aes-256-cbc -d -a -salt -in "$SECFILE")
if [ $? -gt 0 ]; then
    echo "Wrong password, cannot decrypt"
    exit 1
fi
echo "$CLEARTEXT"


#!/usr/bin/env bash
# store a password in a ciphered file
if [[ -z "$1" || ! -r "$1" ]]; then
    echo "usage: $0 <ciphered file>"
    exit 1
fi
SECFILE="$1"

# decipher access file
CLEARTEXT=$(openssl aes-256-cbc -d -a -salt -in "$SECFILE")
if [ $? -gt 0 ]; then
    echo "Wrong password, cannot decrypt"
    exit 1
fi

# get new value to store
echo "enter value to append (1 line)"
echo -n "> "
read PASSWD
UPDATED_CLEARTEXT="$CLEARTEXT
$PASSWD"

# cipher the updated content into a temporary file
echo "$UPDATED_CLEARTEXT" | openssl aes-256-cbc -e -a -salt -out "$SECFILE.updated"
if [ $? -gt 0 ]; then
    echo "Password encryption failed, password not stored in $SECFILE"
else
    mv "$SECFILE.updated" "$SECFILE"
    echo "information successfully encrypted and stored in $SECFILE"
fi

# clean up: overwrite the variables that held cleartext
CLEARTEXT=$(dd if=/dev/urandom bs=128 count=128 2>/dev/null)
PASSWD=$(dd if=/dev/urandom bs=128 count=128 2>/dev/null)
UPDATED_CLEARTEXT=$(dd if=/dev/urandom bs=128 count=128 2>/dev/null)

There are a couple of drawbacks with this method:

  • Your decrypted credentials are stored in cleartext in a Bash variable at some point. The variables are flushed at the end of the storage process, but you don't want to run this script on an untrusted machine.
  • Credentials are displayed in your terminal. Make sure you don't leave that open, and use ./getsecret.sh <credential_file> | grep "bank" to retrieve only the credentials you want, and not the whole file.
  • The storage script requires that you type your password 3 times. That's the only safe way to encrypt/decrypt without passing the password on the command line.

It's definitely not a perfect solution, but I like to do things the hard way :)
