Why are so few people using Google’s 2-factor authentication?

Google’s 2-factor authentication makes it much harder for your Google account to be hacked: after entering your password, you also have to enter an extra code generated by an app on a smartphone that was previously linked to your Google account. Since securing your e-mail account is crucial (especially because your e-mail account is used for password retrieval/reset on very many websites and applications), I view it as something everyone with a Google account simply must use to be able to sleep easy. Especially now that you only have to enter the generated security code once per device ever (and no longer every 30 days, as was the case previously), I find it pretty painless to use.
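For the curious: the codes these authenticator apps produce follow the open TOTP standard (RFC 6238). The app and Google share a secret (that’s what the QR code transfers), and the code is an HMAC of the current 30-second time slot, truncated to six digits. A minimal Python sketch (the secret in the last line is a made-up example, not a real account secret):

```python
# Minimal sketch of the TOTP algorithm (RFC 6238) used by authenticator apps.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, step=30, digits=6):
    # The shared secret is exchanged (via the QR code) as a base32 string.
    key = base64.b32decode(secret_b32, casefold=True)
    # Both sides count the number of 30-second slots since the Unix epoch...
    counter = int((now if now is not None else time.time()) // step)
    # ...and HMAC that counter with the shared secret.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Keep the last 6 digits, zero-padded.
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a different 6-digit code every 30 seconds
```

Because only the current time goes into the calculation, server and phone agree on the code without any network connection on the phone’s side.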

However, of all the people I know with a Google account (almost everyone), only one person (besides myself) actually uses Google 2-factor authentication. Why is that? From asking around (and some guessing on my part), the following reasons emerge:

  1. Never heard of it. Google 2-factor authentication isn’t very well advertised. It’s not like you get a message every time you log in without using it.
  2. Don’t care about security. Most people I know simply don’t care about online security. They vaguely hear things about it, but it never comes up that you can actively do something about it. If they get hacked themselves, it’s treated as a fact of life, something you simply cannot help.
  3. Too much effort. For many people, even using different, strong passwords for every website, together with a password manager, is already way too much effort. Using a smartphone during login, and typing in an extra code, is unthinkable.
  4. Setting up Google’s 2-factor authentication is too complex. The process is actually pretty straightforward (you use your smartphone to take a picture of a QR code, which links your phone), but it is still a big hurdle for many people to even consider.
  5. Re-authentication after loss of a phone is cumbersome. If you lose or reset your phone (or buy a new one), you first have to unlink your previous phone (using one of the recovery codes that you hopefully printed out) before you can link your new phone. After having done this once, many people conclude never to do it again, and don’t re-activate 2-factor authentication.
  6. Application-specific passwords are hard to find and use. Some applications need access to your Google account without you being present to log in interactively (think of mail and calendar applications on desktops and devices). For this, Google has so-called application-specific passwords: 16-letter passwords that can be (or at least should be) used for only one application, and that are displayed only once by Google (after using one you cannot view it again; you would have to generate a new one).
    Not only is the place to generate these passwords very hard to find (hidden somewhere in your Google account’s “Security” settings), but the whole concept of what these passwords are, and how they should be used, is foreign to most users. Also, because the use of an application-specific password for only one application is not (and cannot be) enforced, they can be a source of security loopholes.
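To make point 6 a bit more concrete: an application-specific password is just a fixed-length random string that the application stores and sends instead of your real password. The sketch below is purely hypothetical (how Google actually generates them is not public); it only illustrates the shape of such a password:

```python
# Hypothetical illustration of an application-specific password:
# 16 random lowercase letters, generated with a cryptographic RNG.
import secrets
import string

def app_specific_password(length=16):
    # One independent, cryptographically random lowercase letter per position.
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

print(app_specific_password())  # e.g. "xqmzlprtovndgewk"
```

Since such a string never doubles as your real password and can be revoked individually, leaking one only exposes the single application it was (supposed to be) used for.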

I am certainly not a security evangelist, but I think Google has spent a lot of effort making 2-factor authentication as easy and painless to use as possible (as opposed to other companies, such as Blizzard, where you have to call support and supply credit-card info to link your account to a new phone), and I think that everyone with a Google account should use it. Still, as the points above indicate, there is a long way to go before everyone understands the need for it and the process becomes easy enough for absolutely everyone to use.

Some important gotchas when starting with Amazon RDS SQL Server

RDS is the relational database service of Amazon Web Services. It is a ready-to-use cloud service: no OS or RAID to set up; you just specify the type of RDBMS you want and the memory/storage sizes, and a few minutes later you have a database instance up and running. The instance has its own (very long) DNS name, and you can access the database server from anywhere in the world using the standard SQL Server tools, provided you grant the client IP addresses access through the “Security Group” linked to the database instance. Currently RDS is offered for Microsoft SQL Server, Oracle, and MySQL.

My company (Yuki) is currently using RDS for some smaller projects, and I’m investigating if/how our main database infrastructure could be moved to RDS, so that we can achieve improved scalability, worldwide reach, and lower maintenance costs on our database servers (which are by far the biggest bottlenecks for Yuki’s web applications). In the process I have discovered some important gotchas with RDS SQL Server that are not well advertised, but can be big stumbling blocks:

  1. The sysadmin server role is not available. That’s right: you specify a master user/password when creating the RDS instance, but this user is not in the sysadmin role; it does have specific rights to create users, databases, and such. Amazon has, of course, done this to lock down the instance. However, it can be a big problem when installing third-party software (such as Microsoft SharePoint) that requires the installing user to have sysadmin rights on the SQL Server.
  2. The server time is fixed to UTC. The date/time returned by the SQL Server function GETDATE is always in UTC, with no option to change this. This can cause a lot of problems if you have columns with GETDATE defaults, or queries that compare date/time values in the database to the current date/time. For us this is currently a big problem, which would require quite extensive changes to our software.
  3. No SQL Server backup or restore. Because you have no access to the file system (and the backup/restore rights are currently locked down in RDS), you cannot move your data to RDS by restoring a backup; you have to use BCP or other export/import mechanisms. It also means that you can only use the backup/restore that Amazon offers for the complete instance, so you cannot back up or restore individual databases. This point could easily be the biggest hurdle for many companies moving to RDS SQL Server.
  4. No storage scaling for SQL Server. Both Oracle and MySQL RDS instances can be scaled to larger storage without downtime, but for SQL Server you are stuck with the storage size you specify when you create the instance. This is a huge issue, since you have to allocate (and pay for!) the storage right at the start, when you have no idea what your requirements will be in a year’s time. It greatly undermines the whole scalability story of AWS.
  5. No failover for SQL Server. Again, both Oracle and MySQL can be installed with “Multi-AZ Deployment”, meaning there is automatic replication and failover to a server in another datacenter in the same Amazon region. There is no such option for SQL Server, so your only option in failure situations is to manually restore a backup that Amazon made of your instance.
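On point 2, the UTC clock: one possible mitigation (a sketch, not our actual code) is to treat every timestamp coming out of the database as UTC and convert it at the application boundary. In Python that looks like this; the +2 offset for Dutch summer time is hard-coded purely for illustration:

```python
# Sketch: compensate for an RDS server clock fixed to UTC by treating
# all database timestamps as UTC and converting in the application.
from datetime import datetime, timezone, timedelta

# Pretend this value came back from a column with a GETDATE() default on RDS:
# it is a naive datetime, but we know it is actually UTC.
db_value = datetime(2012, 6, 1, 12, 0, 0)

# Step 1: attach the UTC zone the value implicitly has.
aware_utc = db_value.replace(tzinfo=timezone.utc)

# Step 2: convert to the user's local zone (Amsterdam in summer = UTC+2,
# hard-coded here; a real application would look up the user's zone).
local = aware_utc.astimezone(timezone(timedelta(hours=2)))
print(local.isoformat())  # 2012-06-01T14:00:00+02:00
```

The painful part is that every existing GETDATE default and every date comparison in stored queries sits on the database side of that boundary, which is exactly why the change is so invasive for us.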

All in all, there are still quite a few shortcomings, some of which can be insurmountable hurdles for deployment. Personally, I love the service, as setting up the hardware and software for a reliable database server is a really complex task, of which not very many people have any serious knowledge. Let’s hope that Amazon keeps up its quick pace of innovation and improves the above points for RDS SQL Server.


My name is Sebastian Toet, I live in the Netherlands, and I’m the Chief Software Architect at Yuki, a SaaS (Software as a Service) company that offers online accounting and general business administration services. I was trained as a theoretical physicist (at the University of Technology in Eindhoven, and the Department of Theoretical Physics of the University of Amsterdam). I have been active in software development since 1993, first at Exact Software in Delft (The Netherlands), and since 2005 at the company I co-founded, which was first called FamilyWare and is now called Yuki.

Since 1995 I have worked on web-based ERP/accounting systems, and some of my main interests lie in the following areas:

  • Relational database systems and structures for complex data, complex reporting needs, and a high number of concurrent users.
  • Optimum strategies in Software Release Management for SaaS providers.
  • Workflow systems that deliver the actual service in SaaS.

The last point is pretty important to me, as many SaaS providers don’t actually provide a real service, but just provide online access to software. At Yuki my main attention goes to building great workflow systems that deliver the actual (accounting) service and really allow users to leave the work to others.

In this blog I want to talk about, and discuss, some of the experiences and insights I have gained in the last 15 years of creating web-based ERP and accounting systems, and especially about those subjects that are generally not discussed very much, or at all, like macro-scale database infrastructure and design, internal workflow systems for SaaS providers, patch management for SaaS providers, etc.
