I still see quite a number of SQL Server deployments on Azure Virtual Machines. Often this decision is driven by the application vendor: in my experience to date, a PaaS alternative such as Azure SQL isn't a viable option purely because the vendor hasn't tested their application against it yet and therefore won't support that deployment. Hopefully this will change over time.
In previous job roles I did my fair share of SQL Server deployments on-premises without knowing much beyond the default configuration, or how to optimise storage for performance. Since I've been working with Azure I've learned quite a bit more about storage and how even basic configuration changes can improve performance.
In the case of SQL Server, this has largely been driven by the Azure Marketplace images available for it and how they have evolved over time.
Here you choose the version and edition of SQL Server you require, and the OS platform (and version) to run it on. Yes, you can deploy SQL Server on Linux these days!

Going through the “Create a virtual machine” process you don’t need to add any data disks, but when you get to the “SQL Server settings” tab you will notice a section for storage optimization. This is configured by default for online transaction processing (OLTP).

Click on ‘Change configuration’ to adjust the optimisation settings. You will be presented with the settings below by default.

Let’s go through the settings and explain what they all mean.
Storage optimization
The recommended approach for optimal storage performance is to store your SQL data, SQL log and SQL tempDB files on separate disks. This provides more dedicated IOPS and throughput at the disk level, and also allows you to configure the Azure host caching setting on each disk to the optimal value for that data type. For SQL data the recommended host caching setting is ‘Read-only’, for the log files it should be ‘None’, and tempDB can also be set to ‘Read-only’.
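If you ever need to set (or correct) these caching values outside of the portal, the snippet below is a minimal sketch using the Azure Python SDK (azure-identity and azure-mgmt-compute, assuming a recent SDK version). The subscription, resource group, VM and disk names are placeholders for illustration only.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder values -- replace with your own
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "sql-rg"
VM_NAME = "sqlvm01"

# Recommended host caching per disk role (disk names are illustrative)
RECOMMENDED_CACHING = {
    "sqlvm01-data": "ReadOnly",    # SQL data files
    "sqlvm01-log": "None",         # SQL log files
    "sqlvm01-tempdb": "ReadOnly",  # tempDB files (if on its own disk)
}

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
vm = compute.virtual_machines.get(RESOURCE_GROUP, VM_NAME)

# Apply the recommended caching setting to each attached data disk
for disk in vm.storage_profile.data_disks:
    if disk.name in RECOMMENDED_CACHING:
        disk.caching = RECOMMENDED_CACHING[disk.name]

# Push the updated storage profile back to Azure (long-running operation)
compute.virtual_machines.begin_create_or_update(RESOURCE_GROUP, VM_NAME, vm).result()
```

The marketplace image configures this for you, so a sketch like this is mainly useful for VMs that were built by hand.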
The very first setting you are presented with on the configure storage screen is a default disk configuration for your SQL Server. This configuration can be overridden, but let’s look at the default settings for each option.
General – this just deploys a single data disk which is shared for everything, i.e. SQL data, log storage and tempDB storage. This is the budget option for when you want to keep storage costs down and are willing to forgo optimal performance.
Transaction processing – This is the most common use for SQL Server and the default option here. It deploys separate data disks for the SQL data and log storage, with host caching configured as outlined above. By default, tempDB is placed on the temporary, non-persistent D: drive of the virtual machine. This is fine because tempDB does not require persistent storage, and it makes efficient use of the local storage that comes provisioned with the virtual machine. Just make sure the local D: drive is large enough for your tempDB files (a quick check is sketched after this list); otherwise provision the tempDB storage on its own disk set to ‘Read-only’ caching.
Data warehousing – The disk configuration is the same as for Transaction processing; however, the disks are optimised for data warehousing by using a different disk striping configuration and different trace flags.
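If you go with the default tempDB-on-D: layout, a quick sanity check like the Python sketch below (run on the VM itself) can confirm the local drive is big enough. The 100 GiB requirement used here is just an assumed example figure.

```python
import shutil

# Expected total size of the tempDB data + log files (assumed example figure)
EXPECTED_TEMPDB_GIB = 100

# Free space on the VM's temporary (non-persistent) D: drive
free_gib = shutil.disk_usage("D:\\").free / 1024**3

if free_gib < EXPECTED_TEMPDB_GIB:
    print(f"D: has only {free_gib:.0f} GiB free -- consider a dedicated tempDB disk "
          f"with 'Read-only' host caching instead.")
else:
    print(f"D: has {free_gib:.0f} GiB free -- enough for a {EXPECTED_TEMPDB_GIB} GiB tempDB.")
```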
Data storage
Disk Type
You will only be presented with options for Premium SSD or Ultra Disk (if supported) here. Premium SSD is always recommended as a minimum for SQL Server in order to obtain better performance and lower latency. If you want to use Standard SSD or even Standard HDD disks, this can be changed post-deployment by deallocating the Virtual Machine and editing the account type of the disks.
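For reference, this is roughly what that post-deployment change looks like with the Azure Python SDK. It is only a sketch, using placeholder resource names and assuming a recent azure-mgmt-compute version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DiskUpdate, DiskSku

# Placeholder values -- replace with your own
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "sql-rg"
VM_NAME = "sqlvm01"
DISK_NAME = "sqlvm01-data"

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The account type can only be changed while the VM is deallocated
compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()

# Switch the disk from Premium SSD to Standard SSD
compute.disks.begin_update(
    RESOURCE_GROUP, DISK_NAME, DiskUpdate(sku=DiskSku(name="StandardSSD_LRS"))
).result()

# Start the VM again once all the disks have been updated
compute.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()
```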
Disk Size
You will see the default disk size is a 1 TiB Premium SSD (P30). Although you may not need this much capacity, it is important to remember that the larger the disk, the more IOPS and throughput you get. So if you are looking for performance, you should consider larger disk sizes even if you don’t need the storage capacity provided. The selector will helpfully tell you the maximum IOPS and throughput available for the disk size you have selected.

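To make the size-versus-performance trade-off concrete, here is a small Python helper based on the published Premium SSD tiers (provisioned figures, ignoring bursting; double-check the current Azure documentation for up-to-date numbers). It picks the smallest tier that meets an IOPS and throughput target.

```python
# Premium SSD tiers as (name, capacity GiB, max IOPS, max MB/s) -- provisioned
# figures taken from the Azure managed disk docs; verify current values yourself.
PREMIUM_SSD_TIERS = [
    ("P10",  128,   500, 100),
    ("P15",  256,  1100, 125),
    ("P20",  512,  2300, 150),
    ("P30", 1024,  5000, 200),
    ("P40", 2048,  7500, 250),
    ("P50", 4096,  7500, 250),
]

def smallest_tier_for(iops_needed: int, mbps_needed: int):
    """Return the smallest Premium SSD tier meeting both performance targets."""
    for name, gib, iops, mbps in PREMIUM_SSD_TIERS:
        if iops >= iops_needed and mbps >= mbps_needed:
            return name, gib, iops, mbps
    return None

# Example: ~4,000 IOPS and 180 MB/s needs a P30 even if you only store 200 GiB of data
print(smallest_tier_for(4000, 180))   # -> ('P30', 1024, 5000, 200)
```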
Maximum supported performance size
Another thing to be aware of is that each Virtual Machine size has a maximum disk IOPS and maximum disk throughput capacity. Generally, the bigger (and more expensive) the Virtual Machine, the higher the capacity. If you have selected larger disk sizes it’s possible that you will exceed the maximum supported capacity of the Virtual Machine, which means your disk performance will be throttled to that ceiling. When selecting your disk sizes you will see a notification like the one below if you reach this limit, so you can adjust your disk(s) or Virtual Machine size if required.

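You can sanity-check this outside the portal too. The sketch below simply adds up the provisioned figures for the attached disks and compares them against the VM's uncached disk limits. The VM caps used here are assumed example numbers, so look up the real limits for your VM size in the Azure documentation.

```python
# Assumed example caps -- take the real limits for your VM size from the Azure docs
VM_MAX_UNCACHED_IOPS = 12_800   # VM-level disk IOPS cap
VM_MAX_UNCACHED_MBPS = 192      # VM-level disk throughput cap (MB/s)

# (role, provisioned IOPS, provisioned MB/s) for each attached data disk
attached_disks = [
    ("data",   5000, 200),  # P30
    ("log",    5000, 200),  # P30
    ("tempdb", 2300, 150),  # P20
]

total_iops = sum(iops for _, iops, _ in attached_disks)
total_mbps = sum(mbps for _, _, mbps in attached_disks)

print(f"Provisioned: {total_iops} IOPS / {total_mbps} MB/s")
if total_iops > VM_MAX_UNCACHED_IOPS or total_mbps > VM_MAX_UNCACHED_MBPS:
    print("Warning: the VM size will throttle the disks -- pick a bigger VM "
          "or smaller disks if you actually need this performance.")
```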
Boosting disk IOPS and throughput
As an alternative to using larger disk sizes to ramp up the IOPS and throughput, you can instead use Storage Spaces on Windows Server to stripe multiple smaller disks into a single volume. This aggregates not only the storage capacity but also the IOPS and throughput. The process is well documented elsewhere, but it is worth considering if you are looking for ways to increase performance further.
Note: you are still limited by the maximum IOPS and throughput capacity of the Virtual Machine, as mentioned earlier.
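As a rough illustration of the maths, striping N identical disks gives roughly N times the IOPS and throughput of a single disk, but the effective figures are still capped by the VM limits. The numbers below reuse the assumed example caps from above.

```python
def striped_pool(disk_iops: int, disk_mbps: int, disk_count: int,
                 vm_max_iops: int, vm_max_mbps: int):
    """Aggregate performance of a striped pool, capped at the VM's disk limits."""
    pool_iops = disk_iops * disk_count
    pool_mbps = disk_mbps * disk_count
    return min(pool_iops, vm_max_iops), min(pool_mbps, vm_max_mbps)

# Four P30 disks (5,000 IOPS / 200 MB/s each) striped together, on a VM with
# assumed caps of 12,800 IOPS and 192 MB/s -- throughput hits the VM ceiling.
print(striped_pool(5000, 200, 4, vm_max_iops=12_800, vm_max_mbps=192))
# -> (12800, 192)
```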