Finally: Crash-consistent multiple-volume snapshots in AWS

Last week, Amazon Web Services (AWS) announced support for crash-consistent snapshots across multiple EBS volumes: https://aws.amazon.com/blogs/storage/taking-crash-consistent-snapshots-across-multiple-amazon-ebs-volumes-on-an-amazon-ec2-instance/. This is a big deal, especially for database workloads: you can now take a consistent snapshot of all volumes involved in a single database instance (RAID sets, data + log, file groups, etc.) in one operation, which makes it much easier to build safe disaster recovery solutions for databases running on EC2. Big step forward for AWS!
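
For those who want to try this from code: below is a minimal sketch of taking such a multi-volume snapshot from .NET, assuming the AWS SDK for .NET's EC2 client, which exposes the new CreateSnapshots API as CreateSnapshotsAsync. The region, instance ID, and description are placeholders, and error handling is left out.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class MultiVolumeSnapshot
{
    static async Task Main()
    {
        // Region and instance ID are placeholders; replace them with your own.
        var ec2 = new AmazonEC2Client(RegionEndpoint.EUWest1);

        // CreateSnapshots takes a crash-consistent snapshot of every EBS volume
        // attached to the instance at the same point in time.
        var response = await ec2.CreateSnapshotsAsync(new CreateSnapshotsRequest
        {
            InstanceSpecification = new InstanceSpecification
            {
                InstanceId = "i-0123456789abcdef0",
                ExcludeBootVolume = true   // snapshot only the data volumes
            },
            Description = "Crash-consistent snapshot of SQL Server data volumes"
        });

        foreach (var snapshot in response.Snapshots)
            Console.WriteLine($"Started snapshot {snapshot.SnapshotId} of volume {snapshot.VolumeId}");
    }
}
```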

SQL Server on Amazon Web Services: 2014 in review

I have a particular interest in the combination of Microsoft SQL Server and the cloud services of Amazon Web Services, for the following reasons:

  • For almost all web applications (and for SaaS applications in particular), the database system is by far the most business-critical component, the one where absolutely nothing should go wrong, in terms of both performance and reliability.
  • Microsoft SQL Server has always been an incredibly robust and user-friendly relational database system, and of course a no-brainer if you work within the Microsoft ecosystem. Its (unavoidable) move to core-based licensing has driven up licensing costs (and lost it some friends), but the licensing costs of SQL Server Web Edition in particular are still very low (the Enterprise edition is far too expensive, though, keeping it and its features out of reach of many companies).
  • Amazon Web Services (AWS) dwarfs all other cloud-service providers, and is in most cases the default choice, even when using Microsoft products. 2014 has seen a very rapid pace of innovation from Amazon. The only time AWS wasn’t the leader was in March 2014, when Google drastically reduced the prices of its cloud storage (to $0.026/GB/month); only a day later AWS followed with a similar price reduction of around 60% for cloud storage.

So, in review, some of the important features and innovations of AWS in 2014, relating to SQL Server:

  1. Cloud storage price reduction (March): The prices of Amazon S3 were reduced by around 60% (to $0.03/GB/month). This makes it even easier to store a very generous amount of SQL Server backups and snapshots in S3; it will still be a negligible item on your monthly AWS bill. My backup strategy is now to take EBS snapshots of the primary SQL Server data disks every 10 minutes, make transaction-log backups every 15 minutes and snapshot the disks holding the log backups, and make nightly full SQL Server backups that are stored separately in S3 (a sketch of this schedule follows after this list).
  2. New memory-optimized instances (April): The R3 instance types use new Intel Ivy Bridge processors and go up to 32 cores and 244 GB RAM. These instances should now be the default choice when setting up your own SQL Server instance. There are some important caveats with these instances, though:
    • The R3 instances have no direct competition from Google, so they have not been included in any price reductions and are not very cheap: an r3.2xlarge with 8 cores, 61 GB RAM, and SQL Server Web Edition costs $1.555 per hour.
    • SQL Server Web Edition (the best choice for starting web companies) can only use 64 GB of RAM, so any instance type bigger than r3.2xlarge is wasted.
  3. Multi-Availability Zone (mirroring) support for SQL Server in Amazon’s Relational Database Service (RDS) (May): Although I’m personally moving away from Amazon RDS (Amazon’s RDBMS PaaS solution), mainly because of its lack of backup options at the database level and its lack of storage scalability, RDS is still a great way to start with Amazon’s cloud services. This Multi-AZ support is based on SQL Server database mirroring, which brings a few issues:
    • SQL Server mirroring is not supported for SQL Server Web Edition, so no Multi-AZ if you use this edition of SQL Server.
    • Microsoft seems to be moving away from mirroring as a disaster recovery method, towards AlwaysOn availability groups.
  4. SSD-backed Elastic Block Storage (June): Previously, only the instance storage (which is lost when stopping and starting an instance) could be an SSD. Now all virtual drives can (and should) be SSD storage. This is a huge gain for SQL Server, which benefits tremendously from the much lower access latency of SSDs. When using SSD drives on Amazon I don’t even bother anymore with provisioned IOPS, because an SSD volume of 1 TB (currently the maximum size) already gives you a baseline of 3,000 IOPS.
  5. New Amazon region in Frankfurt (October): Until now, Ireland was the only Amazon region (a set of datacenters) within the EU. With this second region, it has become an option to spread your SQL Server instances, backups, and snapshots over multiple regions, while still keeping all data within the EU.
  6. Private DNS (November): You can now give your SQL Server instances easy-to-use network names.
  7. New payment options for reserved instances (December): In most cloud solutions, the database servers are by far the most expensive instances. This also makes the upfront cost of reserving these instances a big hurdle, even if the cost over the full term is lower. There are now payment options for reserved instances without any upfront payment, where you still get a 30% price reduction if you reserve the instance for a year.
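
To make the backup strategy from point 1 a bit more concrete, here is a minimal sketch of the EBS and S3 side of that schedule, assuming the AWS SDK for .NET (the CreateSnapshot call and the S3 TransferUtility). The volume IDs, bucket name, and file path are placeholders; the transaction-log and full backups themselves are still taken by SQL Server, and the scheduling (every 10/15 minutes, nightly) is left to an external scheduler such as a Windows scheduled task.

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;
using Amazon.S3;
using Amazon.S3.Transfer;

class SqlServerBackupJobs
{
    static readonly AmazonEC2Client Ec2 = new AmazonEC2Client(RegionEndpoint.EUWest1);

    // Every 10 minutes: snapshot the primary SQL Server data volume.
    public static async Task SnapshotDataVolumeAsync()
    {
        await Ec2.CreateSnapshotAsync(new CreateSnapshotRequest
        {
            VolumeId = "vol-0aaaaaaaaaaaaaaaa",      // placeholder: primary data volume
            Description = $"SQL Server data volume {DateTime.UtcNow:O}"
        });
    }

    // Every 15 minutes (after the transaction-log backup has run): snapshot the log-backup volume.
    public static async Task SnapshotLogBackupVolumeAsync()
    {
        await Ec2.CreateSnapshotAsync(new CreateSnapshotRequest
        {
            VolumeId = "vol-0bbbbbbbbbbbbbbbb",      // placeholder: volume holding the .trn files
            Description = $"SQL Server log backups {DateTime.UtcNow:O}"
        });
    }

    // Nightly: copy the full backup file to S3, so it is stored independently of EBS.
    public static async Task UploadFullBackupAsync(string backupFilePath)
    {
        var transfer = new TransferUtility(new AmazonS3Client(RegionEndpoint.EUWest1));
        await transfer.UploadAsync(backupFilePath, "my-sqlserver-backups",   // placeholder bucket
            $"full/{DateTime.UtcNow:yyyy-MM-dd}/{System.IO.Path.GetFileName(backupFilePath)}");
    }
}
```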

Announced, but not yet available:

  • EBS volumes larger than 1 TB. I have always been skeptical of software-based RAID-ing of virtual drives in a cloud environment, so it’s great to see that EBS volumes of up to 16 TB have been announced, making life a lot easier for administrators of large SQL Server databases.

Curious about:

  • Amazon Aurora – a new “cloud-optimized” relational database engine. I’m always a bit skeptical when cloud companies feel the need to build their own “cloud-optimized” relational database engine. As we have seen with Azure SQL, companies are reluctant or unable to re-architect their applications to use these database engines effectively, and they really need the full feature set of the full database product (such as database backups, point-in-time restores, etc.). But we’ll see how this turns out.

Wishlist for 2015:

  • SQL Server Enterprise as “license-included”. Currently, Amazon does not offer server instances that include SQL Server Enterprise. You have to buy your own licenses, which need to include Microsoft Software Assurance, and somehow certify these licenses for use in the cloud. The upfront cost and hassle of this are simply too great. Microsoft Azure offers instances with SQL Server Enterprise included, so is there a reason why Amazon can’t, or is not allowed to?
  • Faster EBS snapshots. I really like the EBS snapshot mechanism, but at the moment snapshots often take so long that in practice you can take them (at best) only every 15 minutes (I would prefer every 5 minutes), making them less than ideal as a disaster recovery solution.
  • Make RDS SQL Server more useful. Currently, RDS SQL Server has some limitations that make it impractical for quite a few use cases:
    1. Make storage scalable. Currently, the storage used by a SQL Server RDS instance cannot be increased.
    2. Make database-level backups available.
    3. Lift the limit of 30 databases.
  • Improve the AWS console UI. This is a common complaint from almost all AWS users: the web-based console UI is still very primitive (especially for such crucial areas as IAM security), and it’s always a guess which feature works in which browser.

All in all, it’s been a pretty good year for all of us running SQL Server in AWS, with the general availability of SSD drives as the highlight. Happy 2015, everyone!

The accumulating cost of digital certificates

Over the last few months I’ve had to go through several of the annoying certificate-renewal processes that recur every year. It’s always painful: not only do you always start the process too late, and not only is it a lot of effort and paperwork (which never goes smoothly), but you also realize that you’re paying quite a lot of money, and for what, exactly?
We are a small SaaS (Software as a Service) company, and we have the following digital certificates:

  • Two wildcard SSL certificates. Every SaaS company that offers a separate sub-domain for each customer needs these. We pay €775 for each wildcard certificate, with a 2-year validity (from Thawte).
  • A code-signing certificate. We offer a few rich-client applications from our web-applications, using ClickOnce technology. These applications need a digital signature. Cost of the certificate is €385 for a 2-year code-signing (Authenticode) certificate from Thawte.
  • A BAPI certificate. This is needed for (e-mail) communications with the Dutch tax authorities. It costs €240, with a validity of 3 years.
  • A PKI certificate from the Dutch government, for SOAP-webservice communications with the Dutch authorities (Government, Chambers of Commerce, Banks). Cost: €600 for a certificate with a validity of 3 years.

In my case, the renewal of Thawte’s code-signing certificate was especially painful, as it didn’t work on Windows 8, and I was sent into several dead ends by Thawte’s support, such as “try it in Firefox” (Authenticode certificate renewal only works in IE) or “start IE9 in Windows 8” (how exactly?).

All in all, it may not seem like a lot of money in total, but if you include the effort it takes every time, I find it quite a burden. It would be nice if, in the future, you only needed a single digital certificate for the company as a whole.

Why are there no out-of-the-box high-availability solutions for SQL Server in the cloud?

When selecting any cloud service, you would like to get some guarantees about the reliability (uptime) and durability (what is the chance that your data is lost) of the service. Now, let’s see what we can find on cloud relational database services, either as a PaaS (Platform as a Service), or installed as part of an IaaS (Infrastructure as a Service, i.e. virtual machines) configuration.

  • Windows Azure SQL Database gives an uptime SLA of 99.9%. The database is replicated threefold (within the same datacenter), but Microsoft gives no SLA for durability of the data. It indicates that to secure your data against widespread datacenter failure, you need to export your complete database yourself (as a SQL script or a BACPAC file).
  • Amazon RDS (Relational Database Service) has no SLA at all for uptime or durability. In the past, RDS has turned out to be the most unreliable of Amazon’s services, because it is built on top of the EC2 service (virtual machines) and the EBS service (virtual drives). RDS uses the SQL Server log files to be able to restore a complete server instance (not separate databases) to any point in time up to 5 minutes in the past. It also maintains daily snapshots of the instance, which can be retained for a maximum of 35 days. RDS also offers live replication and failover to another datacenter (Multi-AZ), but this is not available for SQL Server.
  • SQL Server running on virtual machines from Amazon (EC2) or Microsoft (Azure Virtual Machines) has no SLA of its own. There is only an SLA of 99.95% uptime for the underlying virtual machines, from both Amazon and Microsoft.

So, from this I have to conclude that there’s really a lot of uncertainty about the reliability and durability of any database service in the cloud. It also means that for SaaS applications where you cannot afford a few hours (or more) of downtime, you have to look into separate high-availability solutions. What is available in this area, specifically for SQL Server?

  • Windows Azure SQL Database has no built-in high-availability solution. You can copy a database to a new database in the same datacenter, but to copy it to another datacenter you have to export/import all data into a new database. Failover to a database copy has to be managed manually.
  • Amazon RDS has a Multi-Availability-Zone option (Multi-AZ), where a database replica is maintained in another datacenter in the same region. This option, however, is only available for Oracle and MySQL, not for SQL Server. For SQL Server, your only option is to make instance snapshots, but you cannot copy RDS snapshots to another datacenter.
  • On Amazon or Microsoft Azure virtual machines, you can implement any of the standard high-availability solutions of SQL Server (SQL Server 2012 AlwaysOn, database mirroring, log shipping). But keep in mind the following:
    • SQL Server 2012 AlwaysOn availability groups, and asynchronous database mirroring, are only available in the Enterprise Edition of SQL Server. You will have to buy your own license for this, which is extremely pricey (around $7,000 per core).

    All of these high-availability solutions that you implement yourself on top of cloud virtual machines require a lot of expertise and ongoing management, and more than double the cost of the solution.

Conclusion: In general, for SaaS suppliers, durability of data is much more important than uptime. In most cases 99.9% uptime would be sufficient (less than 1 hour of downtime each month), but data durability has to be “nine nines” (99.9999999%), because any permanent data loss is unacceptable. Currently, none of the cloud relational database suppliers can offer this reliability “out of the box” for SQL Server. Amazon RDS’s Multi-AZ solution comes closest (at almost double the price), but it is not available for SQL Server. The suggested manual solutions (based on export/import of data) are in many cases completely unrealistic for large amounts of data, because an import of terabytes of data could take days to complete.

With all this, are cloud relational database services currently an option for business-critical applications? I think they are, but the situation with regard to high availability needs to improve rapidly, so that high availability is simply an option to check. In the meantime, especially when you need to keep costs in check, we will have to rely on old-fashioned methods like log shipping and data export to safeguard our own data.

Amazon RDS for SQL Server has some ridiculous storage constraints

When you think about cloud computing, you think scalability, flexibility, etc. Right? Not really when you look at the Amazon Relational Database Service (RDS) for SQL Server, at least the storage part of that service. RDS is Amazon’s PaaS (Platform as a Service) offering for relational databases, and it does a lot of things right: new instances running with a few clicks, fully-automated backups with restore-to-point-in-time functionality, good monitoring tools, etc. However, when it comes to the storage side of RDS for SQL Server, there are some serious constraints:

  • You cannot change the allocated data storage after creating the SQL Server instance. That’s right: you have to allocate, and pay for, the maximum storage that you will ever use in the future. Not only is this impossible to predict, it also means that you pay for a lot of unused storage, something “the cloud” was supposed to prevent.
  • When selecting guaranteed IO performance (provisioned IO per second, or IOPS), there are additional constraints:
    • The storage must be an exact multiple of 100 GB. Storage costs $0.138 per GB per month.
    • The selected IOPS must be exactly 10 times the storage in GB (so with 200 GB of storage you must select 2,000 IOPS). IOPS cost $0.11 per IOPS per month.

Now, for any serious database use, you need reliable performance, which means that you almost always have to select the provisioned IOPS option. So if I create a database instance with 1 TB of storage (the maximum, which I have to select because later expansion is not possible), I have to select the 10,000 IOPS option, and I am paying $138 per month for the storage and $1,100 per month for the IOPS. On top of that, of course, you also pay for the instance itself.
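
As a small illustration of how quickly this adds up, the sketch below encodes the constraints and prices quoted above (which may of course have changed since) and computes the monthly storage-plus-IOPS bill; it is purely illustrative.

```csharp
using System;

class RdsSqlServerPiopsCost
{
    const decimal StoragePricePerGbMonth = 0.138m;  // price quoted above
    const decimal IopsPricePerIopsMonth  = 0.11m;   // price quoted above

    // Monthly storage + IOPS cost for RDS SQL Server with provisioned IOPS,
    // enforcing the constraints described in this post.
    static decimal MonthlyCost(int storageGb)
    {
        if (storageGb % 100 != 0)
            throw new ArgumentException("Storage must be an exact multiple of 100 GB.");

        int iops = storageGb * 10;  // IOPS must be exactly 10x the storage in GB
        return storageGb * StoragePricePerGbMonth + iops * IopsPricePerIopsMonth;
    }

    static void Main()
    {
        // 1 TB (the maximum): $138 for storage + $1,100 for IOPS, before the instance itself.
        Console.WriteLine(MonthlyCost(1000));
    }
}
```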

Strangely enough, the storage constraints when using Oracle or MySQL are much less strict: you can scale the storage up at any time, without downtime, and when selecting provisioned IOPS the storage can be any value between 100 GB and 3 TB, with an IOPS-to-storage ratio anywhere between 3 and 10.

So what is going on here? Do Windows Server and/or SQL Server have such inherent constraints that the same storage flexibility as for Oracle/MySQL can never be achieved? Or has the Amazon RDS team just not yet had the time to fully come to grips with the Microsoft systems?

SQL Server connection timeout in .NET client also used as transaction timeout? Mmm…

Sometimes you spend several days (or nights) trying to solve a problem, only to find out that it is caused by some inexplicable design decision you couldn’t possibly anticipate. In this case, I was investigating errors we had been getting during peak hours from our SQL Server database server for the last two months: “Timeout expired” errors on the commit or rollback of database transactions.
Now, our database server probably doesn’t have enough capacity to deal with its load during peak hours, but still, we hadn’t had any errors like this before, even at the busiest of times. Completion of transactions (and especially commits) has always been very fast on SQL Server, in my experience over the last 20 years.

When searching the internet, it turns out quite a few people have this same problem, but nobody knows exactly why. In the end, there’s exactly one place to find the surprising answer, provided by Matt Neerincx of the SQL Server team in a forum post:

The value used in the .NET SQLClient class for the client-side transaction timeout is the value set for the connection timeout of the database connection.

In his post, Matt explains that in this case they had no other value available in the class interface, so they decided to re-use the existing value for the connection timeout. Now, to me these two values are completely unrelated. Sometimes you want to set the connection timeout pretty low, so that an unavailable server, or an incorrectly entered server name, is detected without a very long wait. I had done just that, by reducing our default connection timeout from 30 to 20 seconds, never suspecting this would have such drastic side effects on the completion of transactions!
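
To make the side effect concrete, here is a minimal sketch with System.Data.SqlClient; the server name, database, and UPDATE statement are placeholders. Based on Matt’s explanation, the same Connect Timeout that bounds the connection attempt also bounds the client-side wait for Commit (and Rollback) to complete.

```csharp
using System;
using System.Data.SqlClient;

class TransactionTimeoutDemo
{
    static void Main()
    {
        // Placeholder connection details; the important part is ConnectTimeout.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "my-sql-server",
            InitialCatalog = "MyDatabase",
            IntegratedSecurity = true,
            ConnectTimeout = 20   // seconds; intended only for detecting an unreachable server
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                using (var command = new SqlCommand("UPDATE Orders SET Status = 1", connection, transaction))
                {
                    command.CommandTimeout = 300;  // governs the statement, not the commit/rollback
                    command.ExecuteNonQuery();
                }

                // Per the forum answer quoted above, this commit is also limited by the
                // 20-second Connect Timeout: on a heavily loaded server it can fail with
                // "Timeout expired" even though the statement itself completed fine.
                transaction.Commit();
            }
        }
    }
}
```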

Ah well, problem solved, on to the next one: this morning we had one of the infamous “Deadlock detected” errors in our ASP.NET web application. That’s another can of worms, trying to make sense of the myriad threadpool-related settings in ASP.NET.

I have to take a test before I’m allowed to give Microsoft money?

Today I was trying to renew the Microsoft Action Pack (MAP) subscription for my company. Now, the MAP really is a good deal: for €380 per year, any small company (fewer than 100 employees) that creates or integrates IT solutions based on Microsoft technology gets access to almost all Microsoft software, with a small number of licenses for each product that can be used for regular business use, including 3 full MSDN subscriptions. I had to do this renewal during my vacation, because Microsoft uses some pretty strong language about what happens if you let the subscription slip for even one day (you have to destroy all media, remove all licensed software, and even if you subscribe again later, you can never get back the rights to use older software versions).

The “interesting” part is that just before you try to use your credit card to pay, you get the message that you first have to complete an online seminar, with an accompanying assessment (test) on which you have to score 70%, before being allowed to proceed. So you have to go through 30 or so slides of an online seminar (which seems to be pretty outdated, with no mention of Windows 8 at all), trying to remember the differences between the various editions of Windows Server 2008. This takes maybe 30 minutes, including the test. And after that, as it turns out, you have to take another online seminar and test, this time with specific questions about the Development & Design MAP (with questions like what LINQ stands for). Not only is it a little bit absurd that you have to complete some tests before being allowed to pay money to Microsoft, but also that it is assumed that the person who does the ordering and payment is able to follow these seminars and do the tests.

So, after an hour I had passed all the tests (mostly by answering “All of the above” to questions about what a specific piece of Microsoft software can do), and finally I was allowed to enter my credit-card details. Well, at least now I know that Windows Server 2008 Foundation can do nothing useful, and that the Microsoft licensing options are too complex for anyone to understand. Still love Visual Studio 2012, though (after removing the all-caps menus), so all’s good!
