Confusion over SSL and 1024 bit keys

Published: 2014-10-07
Last Updated: 2014-10-07 12:35:25 UTC
by Johannes Ullrich (Version: 1)
3 comment(s)

Yesterday and today, a post on reddit.com caused quite a bit of uncertainty about the security of 1024 bit RSA keys if used with OpenSSL. The post referred to a presentation given at a cryptography conference, stating that 1024 bit SSL keys can be factored with moderate resources ("20 minutes on a Laptop"). It was suggested that this is at least in part due to a bug in OpenSSL, which according to the post doesn't pick the random keys from the entire space available.

It looks more and more like the assertions made in the presentation are not true, or at least not as widespread as claimed.

However, this doesn't mean you should go back to using 1024 bit keys. 1024 bit keys are considered weak, and it is estimated that 1024 bit keys will be broken easily in the near future due to advances in computer technology, even if no major weakness in the RSA algorithm or its implementation is found. NIST recommended phasing out 1024 bit keys at the end of last year.

So what should you do?

- Stop creating new 1024 bit RSA keys. Browsers will start to consider them invalid, and many other software components already do so or will soon follow the browsers' lead. (I don't think any major CA will sign 1024 bit keys at this point.)
- Inventory existing 1024 bit keys that you have, and consider replacing them.
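For the inventory step, the openssl command-line tool can report the modulus size of a private key or certificate. A minimal sketch (the file names `server.key` and `server.crt` are placeholders for your own files):

```shell
# Show the modulus size of an existing RSA private key
# (first line of output reads e.g. "Private-Key: (1024 bit ...)")
openssl rsa -in server.key -noout -text | head -n 1

# Show the public key size embedded in a certificate
openssl x509 -in server.crt -noout -text | grep "Public-Key"
```

Anything reporting 1024 bit (or less) should go on the replacement list.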

There may be some holdouts. Embedded systems (again) sometimes can't create keys larger than 1024 bits. In this case, you may need to look into other controls.

With cryptography in general, use the largest key size you can afford. For SSL, your options for RSA keys are typically 2048 and 4096 bits. If you can, go with 4096 bits.
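Creating a new 4096 bit key and a certificate signing request with the openssl tool might look like the following sketch (the file names and the subject line are placeholders, not values from this article):

```shell
# Generate a new 4096 bit RSA private key
openssl genrsa -out server.key 4096

# Create a certificate signing request for the new key;
# replace the subject with your own host name
openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"
```

The CSR is then submitted to your CA as usual; the old 1024 bit key should be retired once the new certificate is deployed.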

[1] https://www.reddit.com/r/crypto/comments/2i9qke/openssl_bug_allows_rsa_1024_key_factorization_in/

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn


Comments

The SSL library causes a very high load on servers. Many shared hosting accounts will break those invisible barriers if you use a key of 4096 bits and have decent traffic. Some private servers may take it hard as well. I agree that 1024 bits is just not enough, but I would suggest 2048 bits, which seems to be fine for the foreseeable future. Also, limiting the TLS protocol connect combinations will help, as there is no need for some 60+ connect strings to be tried when today's browsers may use at most 16.

BTW, it appears your website is having MySQL load issues. You should limit the config so it works when being pounded by Google, etc.
