Comparing Public Cloud Performance – Part One – Microsoft Azure

I’ve been working with the major cloud vendors for some years now, and performance has always been a key factor for me when choosing the right platform for Infrastructure-as-a-Service. I’ve always struggled to find the right balance of cost versus configuration, so I’ve created this three-part blog to highlight some of the differences I’ve seen between Azure, AWS and Google Cloud.

I’ve just started a new role as Cloud Architect for 1&1 IONOS, working in the Enterprise Cloud division. One of the main factors in coming here was the technology stack, the surrounding network design, and some of the claims made for the platform, especially around performance and simplicity. This blog will put those performance claims to the test and also highlight the cost benefit of choosing the right cloud provider.

For the tests I’ve kept it simple. I will be using small instances that will eventually host microservices with Docker, so cost is one variable but performance is another. I will be creating an instance with 1 vCPU and 2 GB RAM as a baseline for testing, and I will use Novabench for some basic CPU and RAM performance modelling. There are many tools out there, but I find this one quick and simple for testing key attributes, and I will use the same tool on every cloud vendor’s instance, so the results should be unbiased too.

Let’s start by looking at Azure. For this I’ve selected the A1_v2 size, as it is consistent with the other instances on the clouds I will be testing. The CPU used is an Intel Haswell E5-2673 v3, and the price, including Windows Server licensing and support costs, comes out at £62.20 per month.

Azure Pricing calculator for A1_v2

For IONOS Enterprise Cloud I’ve selected a similar spec, using an Intel Haswell E5-2660 v3 based chip, as this is very close to the A1_v2 instance in Azure. As with Azure, I’ve included the Windows Server licence cost in the subscription, along with 24/7 support, which is actually free. The monthly cost for this server is £50.96, so choosing IONOS Enterprise Cloud would give a saving of £134.88 over the year. A saving is a saving, so on paper the costs look good so far.
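The saving quoted above is simple arithmetic on the two monthly prices; a quick sketch to make the calculation explicit (prices as quoted, in GBP):

```python
# Annual saving from choosing the cheaper of two monthly instance prices.
def annual_saving(monthly_a: float, monthly_b: float) -> float:
    """Yearly difference between two monthly prices, rounded to pence."""
    return round(abs(monthly_a - monthly_b) * 12, 2)

# Azure A1_v2 (£62.20/month) vs the IONOS equivalent (£50.96/month)
print(annual_saving(62.20, 50.96))  # → 134.88
```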

IONOS Enterprise Cloud Pricing for A1_v2 equivalent

Now, what about performance tests between the two? First I wanted to see how the external and internal network connectivity was performing. No big surprise: IONOS outperformed Azure by a factor of three, which is to be expected given the back-end infrastructure design running on InfiniBand and the datacentre interconnects.

Azure Speedtest performance rating

IONOS Enterprise Cloud Speedtest performance rating

Next, the focus turned to CPU, RAM and disk performance. For this I ran the Novabench performance utility on both servers, and the tests did throw up some major differences between the two. Let’s take a look at Azure first.

Azure A1_v2 Instance Novabench Results

The Azure instance had a low score for its CPU benchmark, which makes sense as the CPU is a shared resource with other instances hosted on that Hyper-V cluster node within the Azure cloud. The RAM score was also low, with a throughput of 3929 MB/s. What was noticeable was that disk read performance was good, with a throughput of 163 MB/s, but write speeds were the polar opposite.

The IONOS Enterprise Cloud eclipsed the metrics of the Azure instance and really showed off the advantage of having dedicated CPU and memory resources for the instance.

IONOS Instance Novabench result

The CPU performance was 385% that of the Azure instance; for Azure to achieve a similar score, an additional three vCPUs would have to be added. The RAM speed was also far beyond Azure’s, achieving 19318 MB/s, almost five times faster. Disk read and write performance both outperformed Azure too; the instance maintained an equal throughput for reads and writes, with writes outperforming Azure’s by a factor of 18. Just a note here that I used a standard HDD as the storage medium and could have used an SSD instead, which would have increased the performance even more.

Finally, I configured another instance in IONOS Enterprise Cloud using an AMD Opteron 62xx 2.8GHz processor to see if it could match the Intel-based Azure instance, and for much of the benchmark it scored comparably to the Azure instance. Even better, the cost of the instance was £31.52 a month, giving a saving of £368.16 over the year. It should be mentioned that IONOS Enterprise Cloud lets you configure cores and storage at will in the most granular way possible: core by core and gigabyte by gigabyte.

IONOS AMD Instance Novabench result


For Azure to catch up to performance similar to that of IONOS Enterprise Cloud, the Azure instance would need to be resized to A4_v2, four times the resources of the IONOS instance. This would increase the monthly cost to £182.44, which equates to £2210.64 for the year, of which £1599.12 is the premium over an IONOS instance of equal performance.

Azure A4_v2 Instance Novabench Results

Can you really justify that type of expense, spending an additional £1600 per year for the same performance? IONOS Enterprise Cloud employs KVM-based virtualisation, making extensive use of hardware virtualisation; it maps the CPU power of a real core to each vCPU and provides dedicated memory, so it is surely the way to go.

Get your free 30 day no obligation trial at

The Spectre and Meltdown situation


Many blog posts have been written about the two biggest security vulnerabilities discovered so far in 2018. In fact, we are talking about three different vulnerabilities:

  • CVE-2017-5715 (branch target injection)
  • CVE-2017-5753 (bounds check bypass)
  • CVE-2017-5754 (rogue data cache load)

CVE-2017-5715 and CVE-2017-5753 are known as “Spectre”; CVE-2017-5754 is known as “Meltdown”. If you want to read more about these vulnerabilities, please visit &

Multiple steps are necessary to be protected, and the necessary information is often repeated but distributed over several locations: vendor websites, articles, blog posts and security announcements.

How to protect yourself against these attacks

Two (apparently simple) steps are necessary to be protected against these vulnerabilities:

  1. Apply operating system updates
  2. Update the microcode (BIOS) of your server/workstation/laptop

If you use a hypervisor to virtualize guest operating systems, then you have to update your hypervisor as well. Just treat it like an ordinary operating system. And if you’re using vendor-created software appliances that are based on OS distributions like CentOS, those need to be protected too.

Sounds pretty simple, but it’s not. I will focus on three vendors in this blog post:

  • Microsoft
  • Linux
  • VMware

Let’s start with Microsoft. Microsoft published the security advisory ADV180002 on 3 January 2018.

Microsoft Windows (Client)

The necessary security updates are available for Windows 7 (SP1), Windows 8.1, and Windows 10. The January 2018 security updates are ONLY offered in one of these cases (Source: Microsoft):

  • A supported anti-virus application is installed
  • Windows Defender Antivirus, System Center Endpoint Protection, or Microsoft Security Essentials is installed
  • A registry key was added manually
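The key in question is the anti-virus compatibility key from Microsoft’s advisory. It is normally set by a compatible anti-virus product, but it can be added manually (only do this if you are sure your AV product is compatible):

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat" /v "cadca5fe-87d3-4b96-b7fb-a231484277cc" /t REG_DWORD /d 0 /f
```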


OS Update
Windows 10 (1709) KB4056892
Windows 10 (1703) KB4056891
Windows 10 (1607) KB4056890
Windows 10 (1511) KB4056888
Windows 10 (initial) KB4056893
Windows 8.1 KB4056898
Windows 7 SP1 KB4056897

Please note that you also need a microcode update! Reach out to your vendor.

Windows Server

The necessary security updates are available for Windows Server 2008 R2, Windows Server 2012 R2, Windows Server 2016 and Windows Server Core (1709). The security updates are NOT available for Windows Server 2008 and Windows Server 2012! The January 2018 security updates are ONLY offered in one of these cases (Source: Microsoft):

  • A supported anti-virus application is installed
  • Windows Defender Antivirus, System Center Endpoint Protection, or Microsoft Security Essentials is installed
  • A registry key was added manually


OS Update
Windows Server, version 1709 (Server Core Installation) KB4056892
Windows Server 2016 KB4056890
Windows Server 2012 R2 KB4056898
Windows Server 2008 R2 KB4056897

After applying the security update, you have to enable the protection mechanism. This is different from Windows 7, 8.1 or 10! To enable the protection mechanism, you have to add three registry keys:
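Per Microsoft’s published Windows Server guidance, these are the three values (the third applies to Hyper-V hosts), which can be set from an elevated command prompt; a reboot is required afterwards:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" /v MinVmVersionForCpuBasedMitigations /t REG_SZ /d "1.0" /f
```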

VMware

VMware has published two VMware Security Advisories (VMSA):

VMware Workstation Pro, Player, Fusion, Fusion Pro, and ESXi are affected by CVE-2017-5753 and CVE-2017-5715. VMware products seem not to be affected by CVE-2017-5754. On 9 January 2018, VMware published VMSA-2018-0004, which also addresses CVE-2017-5715. Just to make this clear:

  • Hypervisor-Specific Remediation (documented in VMSA-2018-0002)
  • Hypervisor-Assisted Guest Remediation (documented in VMSA-2018-0004)

Before you apply any security updates, please make sure that you:

  • Deploy the updated version of vCenter listed in the table (only if vCenter is used).
  • Deploy the ESXi security updates listed in the table.
  • Ensure that your VMs are using Hardware Version 9 or higher. For best performance, Hardware Version 11 or higher is recommended.

For more information about Hardware versions, read VMware KB article 1010675.


VMSA-2018-0002 (Hypervisor-Specific Remediation):

OS Update
ESXi 6.5 ESXi650-201712101-SG
ESXi 6.0 ESXi600-201711101-SG
ESXi 5.5 ESXi550-201709101-SG


VMSA-2018-0004 (Hypervisor-Assisted Guest Remediation):

OS Update
ESXi 6.5 ESXi650-201801401-BG
ESXi 6.0 ESXi600-201801401-BG
ESXi 5.5 ESXi550-201801401-BG
vCenter 6.5 6.5 U1e
vCenter 6.0 6.0 U3d
vCenter 5.5 5.5 U3g

All you have to do is:

  • Update your vCenter to the latest update release, then
  • Update your ESXi hosts with all available security updates
  • Apply the necessary guest OS security updates and enable the protection (Windows Server)

Make sure that you also apply microcode updates from your server vendor!
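To verify that both the OS patch and the microcode update are actually active on a Windows system, Microsoft provides the SpeculationControl PowerShell module; from an elevated PowerShell session:

```
Install-Module SpeculationControl
Get-SpeculationControlSettings
```

The cmdlet reports whether the Spectre and Meltdown mitigations are present and enabled.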


Disaster Recovery as a Service: Ten steps to success

Disaster recovery is becoming top of mind for many CIOs. Understanding the success criteria to make the disaster recovery journey of your own organization smooth and successful is critical, but the path to getting there can be difficult.

Follow the ten key steps below to guide you on the right path to success.

  1. Understand why disaster recovery is important to your business, and what your specific disaster recovery requirements are.

The first key step is understanding why you are looking for a disaster recovery solution for your business and what your requirements are, both from a disaster recovery perspective and for the solution itself. Running a Business Impact Analysis (BIA) will help quantify the impact of a disruption to your business, and will also expose the effect of such a disruption on your reputation, including the effect of any loss of data or staff. The BIA is very much the building block and foundation of your disaster recovery planning, and knowing the business impact of outages is probably the most important aspect in answering the “why” question. Knowing the business impact will not only drive the Service Level Agreements (SLAs) for your business processes; it will also help your disaster recovery plan minimise prolonged outages, which can result from human error during the recovery process. If these aspects haven’t been thought through yet, then running a Business Impact Analysis should be the first thing that you do, and it will put you in good stead as you move forward.

An additional aspect of the disaster recovery process is to understand your Recovery Point Objective (RPO) and Recovery Time Objective (RTO). From an SLA perspective, think about the amount of downtime and data loss your business can incur. Zero data loss is obviously ideal, but it can exponentially drive up the cost of the solution. Setting a realistic limit on the data loss your business can tolerate, per business service, is more practical. The time and data-loss windows translate to your RTO and RPO respectively.

Additionally, does your business require adherence to any regulatory compliance or operating rules? For example, do you need to provide proof of a quarterly or yearly disaster recovery test? Disaster recovery testing is important, and there are a lot of factors to take into consideration here. What kind of replication technology would you choose: expensive hardware-based replication, host-based replication, or even replication to the cloud? What you choose will be based on various factors including cost, business policies, SLA requirements and, importantly, environmental factors. For instance, if your data center is located in an area prone to floods, then your disaster recovery location needs to be in a separate geographic area, or even in the cloud.

  2. Should you build your own or buy off the shelf?

The next step is driven by how much investment you want to make, either operationally or in capital expenditure. You have probably already invested quite heavily in infrastructure at your primary data center location: server hardware, virtualization technologies and storage. You could take a simple approach and invest in another physical data center for disaster recovery, but this would mean not only doubling software and hardware infrastructure costs but also paying for an additional physical location. A savvier approach is to use a vendor that supplies disaster recovery services at a fraction of the cost of running dual locations. Keep in mind that choosing the right vendor is important too: look for a leader in the managed disaster recovery services space with years of credible experience.

  3. Understand the difference between disaster recovery as-a-service and backup and recovery as-a-service.

Understand that disaster recovery and backup are different ball games. While backup is a necessary part of a business continuity strategy, it lends itself to SLAs of hours to days. On the other hand, disaster recovery is better suited to SLA requirements in minutes to hours. Based on the business uptime and data loss requirements specific to a business service, your business would deploy a disaster recovery solution for your business-critical applications, while backup would be sufficient for those non-critical business services which can take some downtime.  Choose a disaster recovery as-a-service solution that can protect your entire estate or at least the critical elements of it that drive your business. This includes physical and virtual systems, as well as the mix of different OSs that typically are run within enterprise businesses today. The disaster recovery as-a-service solution that you choose should also be able to provide you with the ability to run your systems within their cloud location for a period of time, until you can get your infrastructure back up and running and transfer services back to your primary site.

  4. Choose the right cloud hypervisor.

It may seem like an easy decision to make: you would seek a vendor that runs the same hypervisor on the back end as you do on your primary site. Keep in mind, though, that this is not a necessity. If you are using VMware vSphere or Microsoft Hyper-V, then running these types of hypervisor in the cloud is going to incur additional licensing costs in a DR solution. Another thing to think about is whether you really need all the bells and whistles once you’ve invoked disaster recovery. Most of your time is going to be taken up with getting services back up and running at your own location as quickly as possible, so maybe not. What you basically need is a hypervisor to host your systems that provides the performance, scale and resilience you require. A more cost-efficient stance would be to utilise a KVM-based hypervisor running within OpenStack. This ticks the boxes in terms of enterprise readiness and, best of all, the service costs should yield a better ROI than those running proprietary hypervisor technologies, saving your business considerable money.

  5. Plan for all business services that need to be protected, including multi-tier services.

Now we’re getting down to the nitty-gritty details. The business services that need to be protected will be primarily driven by the SLAs that brought you down this path. Make sure you capture all the operating system types these business services run on, and also think about how you will handle any physical systems that have not yet been virtualized. Moving virtualized applications to the cloud is an easy process, as these are already encapsulated by the hypervisor in use, but purely physical business applications are another matter altogether. It is not impossible to move physical application data to the cloud, but when it comes to a failback scenario, if the service you select does not have this capability, then you are a sitting duck. This is especially important in the case where a complete outage has occurred and a rebuild is needed. Another thing to think about is how your business services or applications are started in the cloud: can you start or stop systems in a particular order when a business service is made up of different processes, such as a multi-tier application, and inject manual steps into your failover plan if required? Controlling multi-tier business applications that span systems is going to be a high priority, not only while invoking disaster recovery but also when performing a disaster recovery test.

  6. Plan for your RTOs, RPOs, bandwidth, latency and IOPS.

Understanding how you can achieve your Recovery Point Objective (RPO) and Recovery Time Objective (RTO), as well as the IO load of your virtual machines and the peaky nature of writes through the business day, will help you work out what your required WAN bandwidth should be. Determine whether your disaster recovery service vendor can guarantee these RTOs and RPOs, because every additional minute or hour that your business is down, as quantified by the Business Impact Analysis, is going to cost you. If you aim for an RPO of 15 minutes or less, then your bandwidth to the cloud needs to be big enough to cope with extended periods of heavy IO within your systems. If your RTO is something like 4 hours, then you need to know whether your systems can recover within that period, keeping in mind that other operations also need to be managed, such as DNS and AD/LDAP updates, including any additional infrastructure services that your business needs.
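As a rough illustration of the bandwidth point, the sustained WAN throughput needed to hold a given RPO can be sketched like this (the 10 GB and 15-minute figures are assumptions for illustration, not measurements):

```python
# Minimum sustained bandwidth needed to replicate `changed_gb` of writes
# within an RPO window of `window_minutes`, in megabits per second.
def required_mbps(changed_gb: float, window_minutes: float) -> float:
    bits = changed_gb * 1024 ** 3 * 8   # changed data, in bits
    seconds = window_minutes * 60       # RPO window, in seconds
    return round(bits / seconds / 1_000_000, 1)

# e.g. a peak window with 10 GB of changed data and a 15-minute RPO
print(required_mbps(10, 15))  # → 95.4
```

Real sizing also has to allow for replication-protocol overhead and for link contention with other traffic.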

  7. Avoid vendor lock-in while moving data to the cloud.

Understanding how your data will be sent to the cloud provider’s site is important. A solution that employs VMware vSphere on-premises and in the cloud limits you to a replication solution that works only for virtualized systems, with no option to protect physical OS systems. This may seem acceptable at the time, but you will be locked into this solution, and switching DR providers in the future may be difficult. Seeking a solution that is flexible and can protect all major virtualization platforms as well as physical OSs gives you freedom of choice for the future.

  8. Run successful disaster recovery rehearsals without unexpected costs.

Rehearsals or exercises are probably the most important aspect of any disaster recovery solution. Not having an automated disaster recovery rehearsal process that you test on a regular basis can leave your business vulnerable. Your recovery rehearsals should not affect your running production environment. Any rehearsal system should run in parallel albeit within a separate network VLAN, but still have some type of access to infrastructure services such as AD, LDAP and DNS etc. so that full disaster recovery testing can be carried out. Once testing is complete, it is essential that the solution include a provision to easily remove and clean up the rehearsal processes.

  9. How long can you stay in the cloud?

For a moment, let’s imagine that the unthinkable has happened and you have invoked disaster recovery to your cloud service provider. The nature of the outage at your primary location will dictate how long you need to keep your business applications running on your service provider’s infrastructure. It is imperative that you are aware of any clauses within your contract that pertain to the length of time you can keep your business running at the cloud provider’s site. There is also a big pull to get enterprises to think about running in the cloud and staying there, but this is a big decision to make. Performance of the systems is one metric to poll against, as is the performance, or more precisely the quality of service, of the storage the cloud vendor will provide. On the whole, it makes sense to get back onto your own infrastructure as quickly as possible, since it is custom built to support your business.

  10. How easy is it to failback business services to your own site?

Getting your data back, or reversing the replication data path, is going to be important, especially as you don’t want to affect your running systems within the cloud by injecting more downtime! Rebuilding your infrastructure is one aspect that needs to be meticulously planned, and any assistance the solution itself can provide to make this process smoother is a bonus. Your on-premises location is going to need a full re-sync of data from the cloud location, which may take some time, so the solution should support a two-step approach to failback: the re-sync happens in one operation and, once complete, the switch-back of your systems can be done at a time that suits your business.

Success, you’re now armed to create a robust business continuity plan.

Follow the steps above to gain an understanding of what’s needed to be successful on your disaster recovery as-a-service journey, and use them as checkpoints while developing your own robust business continuity plan.

Providing high availability and disaster recovery for virtualized SAP within VMware the right way

Over the past couple of years I have been getting more and more involved in SAP architecture designs for HA and DR, and one of my pet hates at the start of my journey was the lack of basic information on what the SAP components were for and how they interacted with each other; it was a hard slog. For those who are venturing into SAP, or even the hardened SAP veterans out there, the paper below covers SAP in great detail and, more importantly, covers how SAP deployments should be done correctly, especially when high availability and disaster recovery are requirements.

Many organizations rely on SAP applications to support vital business processes. Any disruption of these services translates directly into bottom-line losses. As organizations’ information systems become increasingly integrated and interdependent, the potential impact of failures and outages grows to enormous proportions.

The challenge for IT organizations is to maintain continuous SAP application availability in a complex, interconnected, and heterogeneous application environment. The difficulties are significant:

  • there are many potential points of failure or disruption
  • the interdependencies between components complicate administration
  • the infrastructure itself undergoes constant change

To gain additional competitive advantage, enterprises must now work more closely together and integrate their SAP environment with those of other organizations, such as partners, customers, or suppliers. The availability of these applications is therefore essential.

There are three main availability classes, depending on the degree of availability required:

  • Standard Availability – achievable availability without additional measures
  • High Availability – increased availability after elimination of single points of failure within the local datacenter
  • Disaster Recovery – highest availability, which even overcomes the failure of an entire production site

Symantec helps the organizations that rely on SAP applications with an integrated, out-of-the-box solution for SAP availability. Symantec’s High Availability and Disaster Recovery solutions for SAP enhance both local and global availability for business critical SAP applications.

Local high availability: By clustering critical application components with application-specific monitoring and failover, Symantec’s solutions simplify the management of complex environments. Administrators can manually move services for preventative and proactive maintenance, and the software automatically migrates and restarts applications in case of failures.

Global availability/disaster recovery: By replicating data across geographically dispersed data centers and using global failover capabilities, companies can provide access to essential services in the event of major site disruptions. Using Symantec’s solutions, administrators can migrate applications or an entire data center within minutes, with a single click through a central console. Symantec’s flexible, hardware independent solutions support a variety of cost-effective strategies for leveraging your investment in disaster recovery resources.

Symantec provides High Availability and Disaster Recovery solutions for SAP, utilizing Symantec™ Storage Foundation, powered by Veritas, Symantec™ Replicator Option, Symantec™ Cluster Server, powered by Veritas, and Cluster Server agents that are designed specifically for SAP applications. The result is an out-of-the-box solution that you can quickly deploy to protect critical SAP applications immediately from either planned or unplanned downtime.

Download the full white paper below.


Using Hyper-V? Want to control your app tiers intelligently? SQL, Oracle, SharePoint, IIS, no problem…

So I find myself back in Vegas once again. It’s the Symantec Vision conference, which this year is set in Caesars Palace, and it’s going to be a great event with some big announcements around Hyper-V and Azure. I’ve lost count of the number of times I’ve been to this very mad place, but it’s probably the only place that can cope with thousands of people at tech-company conventions, so it’s very difficult to avoid.

If you’re using Hyper-V, or just testing it in your environment, then I have some great news: ApplicationHA 6.1 is now available for Windows Server 2012 and 2012 R2, and once the Hyper-V role is enabled it can be used to monitor and control your virtualized application workloads.

Over the past months I have been working in depth with Hyper-V and ApplicationHA, especially in the run-up to the general availability of ApplicationHA 6.1 and Symantec Cluster Server 6.1 back at the start of April. The new release also adds support for newer versions of VMware vSphere, focuses on guest support for Windows 2012 R2, and updates the corresponding Microsoft application support too. Importantly, Microsoft Windows with Hyper-V has been added as a supported platform, and most of my time has been spent working with engineering and driving the beta for the release. The last few months have kept me busy building hands-on labs and breakout sessions for Symantec’s annual Vision conference, plus prep for Microsoft TechEd later this month.

Hyper-V seems to have been gaining traction over the past year or so, and conversations I have had with some of our enterprise customers confirm that many of them are evaluating it. What seems constant is that many are reviewing their options for their hypervisor platform, and a majority are testing Hyper-V, kicking the tyres so to speak, to see what it can offer their business. I know there are also a fair few that are starting to deploy it, or have now deployed it, into production. Of course VMware is still the major player in this space, but Microsoft seems to be doing the right things to pull the market in its direction.

ApplicationHA for Hyper-V leverages the Failover Cluster heartbeat service that Microsoft added in Windows Server 2012. ApplicationHA uses this heartbeat function to communicate with Failover Cluster, reporting a heartbeat fault if ApplicationHA is unable to restart the application within the virtual machine. ApplicationHA will attempt to remediate the fault a number of times before it communicates with the heartbeat service. The diagram below gives an idea of the flow.


  1. Microsoft Failover Cluster detects issues with virtual machines if faults occur and moves the affected VM.

  2. ApplicationHA detects issues with the application under control and attempts to restart the faulted application.

  3. In the event that ApplicationHA is unable to start the application, it signals a heartbeat fault to Failover Cluster.

  4. Failover Cluster reboots the VM or moves the VM to another host if the application still has issues starting.
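Stripped of the product specifics, steps 2–4 above are a retry-then-escalate pattern; a generic sketch of the idea (my illustration, not ApplicationHA’s actual implementation):

```python
# Try to restart a faulted application a few times; only if every attempt
# fails, escalate a heartbeat fault so the cluster layer reboots/moves the VM.
def remediate(restart_app, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        if restart_app():
            return "recovered"            # app restarted inside the VM
    return "heartbeat_fault"              # hand over to Failover Cluster

# Simulated app that only comes back on the second restart attempt
attempts = []
def flaky_restart() -> bool:
    attempts.append(1)
    return len(attempts) >= 2

print(remediate(flaky_restart))  # → recovered
```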

If you want to review this capability in more detail, I have posted a number of videos on Symantec Connect which walk through the installation and configuration from start to finish. The last video also demonstrates the Virtual Business Service feature within Veritas Operations Manager, which essentially connects applications together into their relevant tiers and provides the ability to control them and remediate faults within the stack intelligently. Definitely check it out when you get time, as this feature is not just for ApplicationHA; it can be used with Symantec Cluster Server and now adds support for Microsoft Failover Cluster within the tiers too.

Out of sync VM with DC due to snapshot revert – Fix Security database trust relationship issues

Ever get that feeling when you’re working with snapshotted VMs and you mistakenly revert the wrong VM, and now it cannot log into the domain due to an out-of-sync security database between the VM and the domain controller? Follow these quick steps to fix “The security database on the server does not have a computer account for this workstation trust relationship”.

  1. Disconnect the virtual network adapter on the virtual machine that is having connection issues.
  2. Log on to the server/PC that is inaccessible with an account that has Administrator privileges.
  3. Reconnect the virtual network adapter while logged on.
  4. Change the domain name from the FQDN (e.g. windom.local) to the short name (e.g. windom).
  5. Reboot the virtual machine and log back in as the domain user and all should be fine.
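On current Windows versions there is also a shorter route: once you are logged on locally with administrator privileges, the machine’s secure channel can often be repaired directly from an elevated PowerShell session (a sketch; it won’t cover every scenario):

```
Test-ComputerSecureChannel -Repair -Credential (Get-Credential)
```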

Workstation 10 rolls out of the door from VMware

VMware pushes out a major update for Workstation ahead of the highly anticipated vCloud Suite 5.5 release.

VMware Workstation has got to be one of my most-used applications, and probably one of my favorites too, especially as I use it on a daily basis to demonstrate solutions to customers and rely on it heavily for creating training labs for internal folk.

The download is available at

So what’s New?

VMware Workstation 10 delivers best-in-class Windows 8 support, and innovative new features that transform the way technical professionals work with virtual machines, whether they reside on their PCs or on private enterprise clouds.

  • New Operating System Support
    Support has been added for:

    • Windows 8.1
    • Windows 8.1 Enterprise
    • Windows Server 2012 R2
    • Ubuntu 13.10

    As well as for the latest Fedora, CentOS, Red Hat and OpenSUSE releases.

  • VMware Hardware Version 10
    This version of VMware Workstation includes VMware Hardware Version 10 and is compatible with vSphere 5.5. Hardware versions introduce new virtual hardware functionality and new features while enabling VMware to run legacy operating systems in our virtual machines. New features included in this hardware version:

    • 16 vCPUs
      Virtual machines can now run with up to 16 virtual CPUs. This enables very processor-intensive applications to be run in a virtual machine. Note: Running virtual machines with 16 vCPUs requires that both your host and guest operating system support 16 logical processors. Your physical machine must have at least 8 cores with hyper-threading enabled to power on a virtual machine with this configuration.
    • 8 Terabyte Disks
      Virtual machines can now include virtual disks greater than 2 Terabytes. Given the limitations of most operating systems to boot from disks greater than 2 Terabytes, these large disks are most useful as secondary drives for file storage. Note: To use a disk greater than 2TB as a boot disk, your guest operating system needs to boot using EFI in order to read a GPT-formatted disk, which is required to access all of the sectors on a disk of this size. Additionally, the BusLogic controller is not capable of supporting a disk greater than 2TB.
    • Virtual SATA Disk Controller
      A SATA I/O controller can now be selected during the creation of a custom virtual machine, in addition to the IDE and SCSI controllers. This enables use of the in-box SATA drivers that ship with operating systems.
    • USB Improvements
      USB 3 Streams have been implemented to enable high-speed transfer of files from USB 3 external storage devices that support this technology. For customers running Workstation 10 on laptops with small hard disks, large data files, video files, etc. can be stored on an external USB 3 storage device and accessed quickly from within the virtual machine. VMware has also addressed issues with Intel, NEC, AMD, TI and Linux kernel host xHCI drivers to improve overall USB 3 compatibility and performance.
    • More VMnets
      Due to demand, VMware has doubled the number of VMnets in Workstation 10 to twenty! This provides you with more virtual networks to dedicate to specific uses, and it enables more complex networked virtual environments to be built.
    • SSD Pass-through
      Windows 8 is capable of detecting when it is being run from a solid state drive (SSD) and optimizes itself for this hardware. In Workstation 10, the guest operating system will be able to detect when the virtual machine disk file is being stored on an SSD drive, and the operating system can make the same optimizations when it is running in a virtual machine.

    Many additional changes have been made to this Hardware Version including some performance improvements, power savings, and compatibility with new processors. We have also made significant improvements in the startup time of VMware Workstation and in Windows boot time when running Windows virtual machines.

  • Expiring Virtual Machines
    VMware has enhanced the capabilities of Restricted Virtual Machines to include the ability to expire a virtual machine on a specified date and time. This feature enables customers to create virtual machines to be shared with employees, students, customers, contractors, etc. The restricted virtual machine will run until the contract terminates, the demo runs out, or the course ends. The expiring capability establishes a secure connection to a web server to validate the current date and time and prevent users from rolling back their system clock to circumvent the logic. The ability to set the synchronization frequency has been added to allow customers to balance the need for timely expiration against the load on their network. Expiring virtual machines also include the ability to display a custom message for virtual machines about to expire and after a virtual machine has expired. Finally, a lease period can be defined to allow users to run offline for plane trips and remote work.
  • Virtual Tablet Sensors
    Workstation runs very well on the new tablet and convertible PCs. Last year VMware enabled touch-screen input to be passed through to the virtual machine. Workstation 10 introduces a virtual Accelerometer, Gyroscope, Compass and Ambient Light sensor. Customers who run Workstation 10 on a Windows 8 tablet and install Windows 8 in a VM will be able to shake, twirl, tilt, and spin their tablet, and sensor-aware applications running in a virtual machine will respond accordingly.
  • User Interface Enhancements
    There are many user interface improvements that we have included in the Workstation 10 release. The highlights include:

    • Windows 8 Unity Mode Support
      We are continuing to improve how the Workstation Unity user-interface works with Microsoft’s “Modern UI” or the “Microsoft Design Language” (The new tile interface in Windows 8 formerly known as Metro). Microsoft Store applications are now available in the Unity menu and can be launched directly from it.
    • Multiple Monitor Navigation
      When running with 2, 3, 4 or more monitors it has been frustrating to use the full screen mode in Workstation and toggle through each combination of monitors to get to the one you want. The full screen toolbar now has an option to choose your configuration from a menu and jump to it immediately.
    • Power Off Suspended Virtual Machines
      Workstation 10 lets you simply power off a suspended Virtual Machine in order to make changes to the configuration without powering it on and then off first. Powering off a suspended virtual machine will lose any information stored in memory, but will not lose anything saved to the virtual disk.
    • Remote Hardware Upgrade
      When working with virtual machines running remotely on vSphere or on another instance of Workstation, you can now remotely upgrade the virtual hardware version.
    • Localized into Simplified Chinese
      The Workstation user interface and online help has been translated into Simplified Chinese.
  • New Converter
    This release includes the latest version of the VMware Standalone Converter. The Converter enables users to turn a physical machine into a virtual machine. This version of the Converter includes the ability to convert machines running Windows 8, Windows Server 2012, and RHEL 6 operating systems. It supports virtual and physical machines with Unified Extensible Firmware Interfaces (UEFI) and EXT4 file systems as well as GUID Partition Table (GPT) disks.
  • OVFTool
    The Open Virtual Machine Format (OVF) is a virtual machine distribution format that supports sharing virtual machines between products and organizations. The VMware OVF Tool is a command-line utility that enables a user to import and export OVF packages to and from a wide variety of VMware products. The latest release of the OVFTool is included with VMware Workstation 10 and is used to upload and download virtual machines to and from vSphere. The OVFTool is also used to import an .OVF file which may come in handy when importing virtual machines created using desktop virtualization software developed by Oracle.
  • VMRun Enhancements
    The VMRun command-line utility has been enhanced with two new options, getGuestIPAddress and checkToolsState, to retrieve the IP address of the guest operating system and determine the state of VMware Tools in a guest.
  • Embedded 30-day Trial
    Workstation 10 can now be evaluated for 30-days by simply entering your email address the first time you run the application. This change is intended to make it much easier for our customers to learn about the latest release of VMware Workstation without their license keys being trapped by spam filters.
  • VMware KVM
    Many VMware customers have asked for a way to run a virtual machine so that their users do not realize they are running in a virtual machine. VMware Workstation 10 includes a new executable (on Windows only for now) called VMware KVM. Run vmware-kvm.exe vmx-file.vmx from the command line and your virtual machine will launch in full screen with no toolbar or any other indicator that you are running a VM. You can use Ctrl-Alt to ungrab from the virtual machine and the Pause/Break key to toggle between multiple virtual machines running under VMware KVM, or between a virtual machine and the host system. The user experience should be just like that of using a KVM switch, hence the name. If you simply type vmware-kvm.exe from the command line you will get some options that can be used in this format: vmware-kvm.exe [OPTIONS] vmx-file.vmx. If you run vmware-kvm.exe --preferences you will be presented with an interface that allows you to configure certain behaviors, such as the key used to cycle between virtual machines. This is the latest generation of an executable previously called VMware-fullscreen.exe that shipped with Workstation 8, with a major upgrade in display handling.
  • WSX 1.1
    Try out the latest version of WSX which can be found on the VMware Communities page at:
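As a quick illustration of the VMRun enhancements mentioned above, here is a minimal sketch; the .vmx path is a hypothetical example and the commands assume Workstation 10 is installed on the host:

```shell
# Sketch of the two new vmrun options in Workstation 10.
# The .vmx path below is a hypothetical example; adjust to your own VM.
VMX="$HOME/vms/web01/web01.vmx"
if command -v vmrun >/dev/null 2>&1; then
  vmrun -T ws getGuestIPAddress "$VMX"   # prints the guest OS IP address
  vmrun -T ws checkToolsState "$VMX"     # reports the VMware Tools state
else
  echo "vmrun not found - install VMware Workstation 10 first"
fi
```

Note that getGuestIPAddress needs VMware Tools running in the guest before it can return an address.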

Windows 2012 R2 Preview is here

Microsoft has posted the Windows Server 2012 R2 preview online ahead of its annual BUILD conference.

Windows Server 2012 R2 Preview Now Available for Download

Windows Server 2012 R2 comes in several different editions, namely Essentials, Developers, Windows Azure and IT Professionals. The preview versions will remain active until January 15, 2014, after which users will need to buy a retail license.

Windows Server 2012 R2 preview features the Start button as well as context menu settings for shutting down and restarting the operating system. 

You can try out the latest Windows Server 2012 R2 edition by hitting the download link below.

Windows Azure Evaluation

Windows Azure Pack

Microsoft Windows 8.1 Preview is here

If you’re ready to bring back the Start button, you can head over to Microsoft’s website to download the Windows 8.1 Preview right now. The Preview release is open to all users, though any machine that isn’t running Windows 7 or later will have to perform a fresh install to get it working. For those already on Windows 8, Microsoft is offering the 8.1 Preview directly through the Windows Store. To access it, users have to first visit Microsoft’s website to initiate a small update to Windows itself. Afterward, their device will restart and direct them to a download of the 8.1 Preview update.

vMotion Compliant in guest clustering with Symantec Storage Foundation HA 6.0.x

Today saw the release of Symantec Storage Foundation HA for Windows 6.0.1 and Storage Foundation HA 6.0.2 for Linux, which include additional functionality enabling application high availability for Windows and Linux within VMware virtual machine environments. In this era of virtualization, one of the main challenges is to achieve 100% virtualization for your existing and new deployments. As customers travel towards this goal, they find that providing resilience for key business applications can be difficult to attain, and they can meet resistance from application owners who are used to the traditional way resilience is provided with clustering on physical hardware. While it is possible to use clustering inside a virtual machine, doing so can prohibit the use of VMware technology that provides virtual machine transportability between physical ESX hosts, in particular vMotion, Distributed Resource Scheduler (DRS) and snapshots. Tied to this, there is also a requirement for direct access to storage via Raw Device Mappings (RDMs), which means that VMware admins need their storage counterparts to provision storage up front. The VMware admin therefore needs to know how much storage the application owner requires, and changing this after the fact can be complicated and disruptive.

So what’s needed is an intelligent way that storage is controlled and accessed between virtual machines and this process needs to be transparent to VMware technologies such as vMotion, DRS and VMware HA. Symantec has been working to provide this functionality and enhance the capabilities of Veritas Cluster Server. Today this functionality is available to preview and test in your environment. Additionally the creation and configuration of the application cluster is simplified and enhanced with VMware in mind. In a matter of five simple steps a cluster is created and the configuration of the application is dynamically discovered to make the whole experience a painless one.

In addition to this, our focus has been to leverage the management of VMware environments via the vSphere Client to manage various aspects of the application, such as start, stop and switchover from one virtual machine to another. The access control is an extension to vCenter administration roles, and access can be customized based on your virtual infrastructure. A pluggable architecture facilitates the use of a browser to access the user interface if the vSphere Client cannot be used in your environment.

Key Application Failover Improvements for VMware in both Windows SFWHA 6.0.1 and Linux SFHA 6.0.2

1)     Same Console Server support for both Veritas Cluster Server (VCS) and ApplicationHA – Users will now be able to manage virtual environments running ApplicationHA and Veritas Cluster Server using a single vCenter-pluggable Symantec High Availability Console.

2)     VCS Storage Support/Integration (in-guest, platform) – Provides a way to enable vMotion/DRS in VCS clusters configured with shared storage and deployed on virtual machines in VMware environments.

3)     Application Monitoring and Failover Target Configuration workflow – Wizard workflows enable users to configure application monitoring, un-configure application monitoring, add a failover target (to a service group/application), and remove a failover target (from a service group/application).

4)     Visibility and Control – Using the VCS Cluster View on the vSphere Client and the Symantec High Availability Console Dashboard on the vSphere Client to show ApplicationHA and VCS application states and an overview. The dashboard provides an overview of the entire data center/cluster from a VCS/AppHA perspective. Users will see a consolidated list of applications running on all the virtual machines.

Key Application Failover Improvements for VMware in Windows WxRT 6.0.1

1)     New Application Config Wizard Support for Custom Applications, SAP and Microsoft SQL Server – Users are able to configure one or more SQL nodes for failover in a VMware environment. A documented set of steps will have to be followed for installing SQL itself before VCS can be configured on these nodes. The user interface will configure the cluster and the SQL service group, and will also set appropriate values for restarting on the faulted node. It will also configure the newer agents required to support a VMware environment.

Key Application Failover Improvements for VMware in Linux LxRT 6.0.2

On Linux, we have a release for SFHA 6.0.1. With the 6.0.2 release we are extending application failover between VMware virtual machines. This release includes changes in agents to support vMotion and the use of VMDK disks. This release will support Storage Foundation HA, but no changes will be implemented in Storage Foundation. Storage Foundation is included to allow customers to upgrade from previous versions. This product inclusion means that you can upgrade SFHA or VCS from 5.1, 5.1SP1 and 6.0 directly to the latest release. Additional products, such as Storage Foundation for Oracle RAC and Storage Foundation Cluster File System, are not included in this release, as having multiple systems access VMDK disks is not possible with the current agent.

1)     New Application Config Wizard Support for Oracle Databases, SAP, WebSphere MQ configurations and Generic Applications – Users are able to configure multiple nodes for failover in a VMware environment.

  1. With Oracle, SAP and WebSphere, each node in the environment will have to be installed with the application binaries and have the application running before VCS can inspect and configure these nodes. The user interface will configure the cluster and a service group for the specific application, and will also set appropriate values for restarting on the faulted node. It will also configure additional agents required to support a VMware environment. The wizard will configure disks and the application, prompt users for virtual IP addresses to complete the configuration, and push out the VCS binaries to all nodes in the cluster.
  2. For applications traditionally configured using the application agent, there is a wizard that walks through the setup of the Generic Application, including mount points and virtual IP resources, along with cluster setup.

Go grab yourself a trial version download directly from and give it a spin today.