
Ever been sitting in a hotel (or your home) and tried to watch your favorite sports team on a streaming service, but couldn’t due to the archaic and draconian broadcast licensing agreements which ban in-market online distribution?!? And the name brand hotel chain doesn’t carry the local affiliate for your pennant-chasing team?!? WTF!?!
Or, you just wanted to connect to any website through an inexpensive (possibly even FREE!) proxy?
Not to worry! Within only a few minutes you too can start up an EC2 instance in any region of the world and route your browser through it (via PuTTY) to make the three letter league “think” you aren’t in the home town market – don’t worry, you’ll still get all the streaming ads, and you’re paying for the streaming service, so you aren’t cheating the system completely, just flipping the bird at the old-school traditional broadcast monopoly!!!
Overview:
- Launch an Ubuntu 18.04 micro or nano instance in the region of your choosing.
- Configure PuTTY and connect to instance – this is how you “proxy” through the EC2 instance.
- Configure browser proxy settings to connect through a running PuTTY session, then surf the web!
First, you need an AWS account. If you don’t have one yet, you can easily get one. And to boot, you get many services (including running this proxy server) free for up to one full year! And some services even beyond that.
Launch Proxy Instance
In the AWS Console select the EC2 service, then Launch Instance. I’m launching a micro (qualifies for free tier) Ubuntu 18.04 instance. If you’d like you can launch another version or distro. Also, the AMI ID will be different if you choose a different region, and it may be different if you launch some time after this post. The key is to just launch a current version.
Launch the instance into either the default VPC or a custom VPC in a public subnet – this instance needs to have a public IP address. You can choose either an existing security group or create a new one – make sure to allow TCP port 22 from your current location. For the key pair, either use an existing one in your account to which you have access, or create a new one. You’ll need this to SSH to the instance.
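If you prefer the AWS CLI for the security group rule, something along these lines (substitute your own security group ID and public IP) opens SSH from just your location:
aws ec2 authorize-security-group-ingress --group-id sg-<SecGroup> --protocol tcp --port 22 --cidr <your public IP>/32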
Make note of the instance’s public IP address as you’ll need it shortly…
SSH to EC2 Proxy Instance
If you don’t already have PuTTY and PuTTYGen, or don’t know how to convert the AWS-provided .pem file to a .ppk file (required for PuTTY), click here to get these apps, generate the .ppk file, and configure PuTTY.
In addition to the previous connection information, navigate to Connection, SSH, Tunnels and add “2100” in the Source port field, select the “Dynamic” radio button, then click Add. Save the configuration, then connect to the instance.
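If you’d rather skip the GUI, plink (PuTTY’s command-line sibling) can open the same dynamic tunnel; a rough equivalent, assuming your key file is proxy.ppk and a stock Ubuntu AMI (hence the ubuntu user):
plink -ssh -N -D 2100 -i proxy.ppk ubuntu@<EC2 public IP>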
Configure Browser Proxy Settings, then Browse the Web
Firefox
Go to Firefox settings, or enter “about:preferences#general” in the address field, scroll to the bottom and click on Settings under Network Settings. Select Manual proxy configuration and enter “localhost” in the SOCKS Host field, and port “2100” in the Port field (matching the tunnel you configured in PuTTY). Save the settings, then connect to the web.
Chrome (Works for IE and Edge too)
Select settings and search for “proxy,” then go to the network settings.
Click on LAN Settings.
Select “Use a proxy….,” then Advanced.
Enter “localhost” and port “2100” in the SOCKS section, then save your settings.
Verify Proxy is Working
Browse to speedtest.net or a similar service and verify that the “client” IP address displayed matches the public IP address of the EC2 proxy instance from above.
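You can also verify from a command line – curl can route a request through the same SOCKS tunnel (checkip.amazonaws.com is just one of several services that echo your public IP):
curl --socks5-hostname localhost:2100 https://checkip.amazonaws.com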
Make sure to leave the PuTTY session open as long as you are surfing the web, or watching your streaming event….
When you’re done with the proxy just terminate it or shut it down. If you terminate the instance you may want to create an AMI from which you can quickly launch an instance in the future. Also, make sure to undo the proxy settings in your browser. Happy watching/browsing!
NOTE: This is an update to the kick-ass post, “How To Create Your Own Private Proxy Using Amazon EC2 and Putty on Windows.” Thanks, Nimrod!
A Few of My Favorite Online Tools

IP Calculator
ipcalc takes an IP address and netmask and calculates the resulting broadcast, network, Cisco wildcard mask, and host range. By giving a second netmask, you can design subnets and supernets. It is also intended to be a teaching tool and presents the subnetting results as easy-to-understand binary values.
[Image: IP Calculator results]
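The same calculator exists as a command-line package on most Linux distros if you’d rather work offline; for example (the address and mask here are arbitrary):
sudo apt install ipcalc
ipcalc 192.168.10.37/27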
DNS Propagation Checkers
whatsmydns.net lets you instantly perform a DNS lookup to check a domain name’s current IP address and DNS record information against multiple name servers located in different parts of the world. This allows you to check the current state of DNS propagation after having made changes to your domain’s records.
dnschecker.org is a similar tool.
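You can also spot-check propagation yourself by querying a few public resolvers directly with dig (example.com stands in for your domain):
dig @8.8.8.8 example.com A +short
dig @1.1.1.1 example.com A +short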
[Image: DNS Checker results]
Test Website Performance
Use webpagetest.org to run a free website speed test from multiple locations around the globe using real browsers (IE and Chrome) and at real consumer connection speeds. You can run simple tests or perform advanced testing including multi-step transactions, video capture, content blocking and much more. Your results will provide rich diagnostic information including resource loading waterfall charts, Page Speed optimization checks and suggestions for improvements.
[Image: Website performance test results]
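For a quick command-line approximation of the same idea, curl’s -w timing variables report DNS, first-byte, and total times (the URL is a placeholder):
curl -o /dev/null -s -w "DNS: %{time_namelookup}s  TTFB: %{time_starttransfer}s  Total: %{time_total}s\n" https://example.com/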
As I posted recently, I’ve been using some great resources to prepare for some AWS exams. In my review of the AWS Certified Solutions Architect Official Study Guide: Associate Exam I raved about how good it is for exam preparation. Now I want to talk about the “AWS Certified SysOps Administrator Official Study Guide: Associate Exam,” which will help you prepare for – and pass – that exam.
As the publisher describes it: “The AWS Certified SysOps Administrator Official Study Guide: Associate Exam is a comprehensive exam preparation resource. This book bridges the gap between exam preparation and real-world readiness, covering exam objectives while guiding you through hands-on exercises based on situations you’ll likely encounter as an AWS Certified SysOps Administrator. From deployment, management, and operations to migration, data flow, cost control, and beyond, this guide will help you internalize the processes and best practices associated with AWS. The Sybex interactive online study environment gives you access to invaluable preparation aids, including an assessment test that helps you focus your study on areas most in need of review, and chapter tests to help you gauge your mastery of the material. Electronic flashcards make it easy to study anytime, anywhere, and a bonus practice exam gives you a sneak preview so you know what to expect on exam day.”
I would highly recommend getting this study guide and using it to help you achieve the “AWS Certified SysOps Administrator – Associate” certification.
I’ve been working with AWS for 10 years now and love it more than ever! About four years ago, when I first learned of AWS certifications, I went right out and took the Solutions Architect – Associate exam and passed it the first time. When it came time to re-certify a couple of years later I went in a little unprepared and barely missed passing, so I started looking for some help.
That’s when I found the “AWS Certified Solutions Architect Official Study Guide: Associate Exam.” I got it and read through it over the next few weeks, brushing up on many topics I already knew but also learning a lot of new material.
In it the authors do a great job of explaining the following topics, and it will definitely help prepare you for the exam!
- Mapping Multi-Tier Architectures to AWS Services, such as web/app servers, firewalls, caches and load balancers
- Understanding managed RDBMS through AWS RDS (MySQL, Oracle, SQL Server, Postgres, Aurora)
- Understanding Loose Coupling and Stateless Systems
- Comparing Different Consistency Models in AWS Services
- Understanding how AWS CloudFront can make your application more cost efficient, faster and secure
- Implementing Route tables, Access Control Lists, Firewalls, NAT, and DNS
- Applying AWS Security Features along with traditional Information and Application Security
- Using Compute, Networking, Storage, and Database AWS services
- Architecting Large Scale Distributed Systems
- Understanding Elasticity and Scalability Concepts
- Understanding Network Technologies Relating to AWS
- Deploying and Managing Services with tools such as CloudFormation, OpsWorks and Elastic Beanstalk.

It’s common knowledge that TLS is preferred over SSL because it provides better security, and an industry-wide push to stop using SSL, use HTTPS exclusively (instead of HTTP), and increase security overall has been underway for a while. But it’s also important to use the latest version of TLS. Fortunately, Windows Server 2012 R2 supports all three current versions of TLS: 1.0, 1.1, and 1.2. But what if your environment requires disabling the lower versions, like 1.0 or even 1.1? Sure, there are various resources on the Internet, from .reg files to both paid and free utilities. But since I often work in environments that restrict the use of such methods, and since I like to use the simplest native method possible, I have a set of commands I run in PowerShell to disable TLS 1.0 and 1.1, and to explicitly create the keys for and enable TLS 1.2 (which, for some reason, aren’t already in the registry).
Note: although this was written specifically for Server 2012 R2 these commands work on Server 2008 R2 as well.
After installing the latest version of PowerShell on new servers one of the next things I do is run the set of commands below. First though, we’ll take a look at the current security (SCHANNEL) protocols on a new 2012 R2 server with:
Get-ChildItem -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols' -Recurse
[Image: SCHANNEL registry settings viewed in PowerShell on Server 2012 R2]
Here is the set of commands I run to disable TLS 1.0 and 1.1 and explicitly enable TLS 1.2 on Windows Server 2012 R2:
#2012 R2 - Disable TLS 1.0 and 1.1, enable TLS 1.2
$Protocols = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"

$TLSProto = "TLS 1.0"
New-Item $Protocols -Name $TLSProto
New-Item "$Protocols\$TLSProto" -Name CLIENT
New-Item "$Protocols\$TLSProto" -Name SERVER
New-ItemProperty "$Protocols\$TLSProto\CLIENT" -Name Enabled -Value 0 -Type DWORD
New-ItemProperty "$Protocols\$TLSProto\SERVER" -Name Enabled -Value 0 -Type DWORD

$TLSProto = "TLS 1.1"
New-Item $Protocols -Name $TLSProto
New-Item "$Protocols\$TLSProto" -Name CLIENT
New-Item "$Protocols\$TLSProto" -Name SERVER
New-ItemProperty "$Protocols\$TLSProto\CLIENT" -Name Enabled -Value 0 -Type DWORD
New-ItemProperty "$Protocols\$TLSProto\CLIENT" -Name DisabledByDefault -Value 1 -Type DWORD
New-ItemProperty "$Protocols\$TLSProto\SERVER" -Name Enabled -Value 0 -Type DWORD
New-ItemProperty "$Protocols\$TLSProto\SERVER" -Name DisabledByDefault -Value 1 -Type DWORD

$TLSProto = "TLS 1.2"
New-Item $Protocols -Name $TLSProto
New-Item "$Protocols\$TLSProto" -Name CLIENT
New-Item "$Protocols\$TLSProto" -Name SERVER
New-ItemProperty "$Protocols\$TLSProto\CLIENT" -Name Enabled -Value 1 -Type DWORD
New-ItemProperty "$Protocols\$TLSProto\CLIENT" -Name DisabledByDefault -Value 0 -Type DWORD
New-ItemProperty "$Protocols\$TLSProto\SERVER" -Name Enabled -Value 1 -Type DWORD
New-ItemProperty "$Protocols\$TLSProto\SERVER" -Name DisabledByDefault -Value 0 -Type DWORD
Shrinking EBS Windows Boot Volume

After migrating my physical server to AWS recently I needed to shrink the boot volume a bit. The original server’s drive was ~1TB, so that’s the size my boot EBS volume was after the migration, but since I only have about 125GB of used space I wanted to reduce the overall volume size to about 150GB. Not surprisingly AWS doesn’t provide a native way to do this so I had to get creative. I found most of the steps on the AWS Developer Forum and have adapted them to my needs, along with adding a few. And just like with the physical server-to-cloud migration we’ll do here what many say can’t be done….
Step 1 – Create Snapshot of Volume
Using the AWS Console or AWS CLI create a snapshot of the volume you want to reduce, or an AMI of the instance. This will protect you in case something goes off the rails, making it quick and easy to recover.
aws ec2 create-snapshot --volume-id vol-1234567890abcdef0 --description "Snapshot of my root volume."
Step 2 – Shrink Volume
On the server in Disk Management, right-click the volume and select Shrink Volume. Select the desired size and let it run. Depending on a variety of factors this could take a while (several minutes to an hour or so) so be patient.
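If you’d rather script the shrink than click through Disk Management, diskpart can do the same thing; a sketch (the volume letter and the amount to shrink, in MB, are examples):
diskpart
select volume C
shrink desired=870000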
Step 3 – Stop Server and Detach Volume
When the volume shrink completes, stop (power off) the server – preferably from within Windows, otherwise via the AWS console or AWS CLI. Then detach the volume from the Windows instance:
aws ec2 detach-volume --volume-id vol-1234567890abcdef0
Step 4 – Start Ubuntu EC2 Instance, Attach Volumes
Select the appropriate Ubuntu AMI (version and region) and launch an instance from it, either through the web console or the AWS CLI:
aws ec2 run-instances --image-id <AMI ID> --key-name <key> --count 1 --instance-type m1.large --security-group-ids sg-<SecGroup> --placement AvailabilityZone=<AZ>
Create a new EBS volume of the size you want – at least as large as the shrunk volume. Attach the original volume (the one you want to clone) to the Ubuntu instance and choose a mount point; for this document we will use “sdo” – “o” for Original (note that “sdo” in the AWS interface gets remapped to “xvdo” in Ubuntu). Then attach the new volume to the Ubuntu instance as “sdn” – “n” for New (likewise remapped to “xvdn”):
aws ec2 create-volume --size 150 --region <region> --availability-zone <AZ> --volume-type gp2
aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-01474ef662b89480 --device /dev/sdo
aws ec2 attach-volume --volume-id vol-1234567890abcdef1 --instance-id i-01474ef662b89480 --device /dev/sdn
Step 5 – Connect to Ubuntu, Run Commands
Connect to the Ubuntu instance and elevate to root with sudo su.
View and Save partition information for the “original” disk:
fdisk -l -u /dev/xvdo
Setup partitions for the “new” disk:
fdisk /dev/xvdn
At the fdisk prompt (Command (m for help):), enter:
“n” to create a new partition
“p” to make it a primary partition
Select a partition number (match the original setup)
Enter the first sector (match the original setup)
Enter the last sector (for first partition match the original, for a second, use the default last sector)
“t” to set the partition type to 7 (HPFS/NTFS/exFAT) on all partitions
Repeat the above process for all needed partitions
“a” to set the boot flag on partition 1
“p” to review the setup
“w” to write changes to disk and exit
Run the following to verify settings on both “old” and “new” drives:
fdisk -l -u /dev/xvdo
fdisk -l -u /dev/xvdn
Copy the MBR (Master Boot Record). The MBR is on the first sector of the disk, and is split into three parts: Boot Code (446 bytes), Partition Table (64 bytes), and the Boot Code Signature, 55aa (2 bytes). We only want the boot code, and will copy it with the “dd” command to do a direct bit copy from disk to disk:
dd if=/dev/xvdo of=/dev/xvdn bs=446 count=1
Clone the NTFS file system one partition at a time (/dev/xvdo1, /dev/xvdo2):
ntfsclone --overwrite /dev/xvdn1 /dev/xvdo1
ntfsclone --overwrite /dev/xvdn2 /dev/xvdo2
Step 6 – Detach from Ubuntu, Attach to Windows
Detach both volumes from Ubuntu instance and attach new volume to Windows instance as device /dev/sda1:
aws ec2 detach-volume --volume-id vol-1234567890abcdef0
aws ec2 detach-volume --volume-id vol-1234567890abcdef1
aws ec2 attach-volume --volume-id vol-1234567890abcdef1 --instance-id i-01474ef662b8948a --device /dev/sda1
Step 7 – Verify and Cleanup
Start the Windows instance:
aws ec2 start-instances --instance-ids i-1234567890abcdef0
Note: it may take several minutes to start the instance, so don’t be impatient or alarmed….. Once the instance starts, logon to the Windows instance, then run chkdsk to validate the drive and correct any errors:
chkdsk c: /f
Terminate Ubuntu instance:
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
Finally, for good measure, make an AMI of your instance or a snapshot of the volume.
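A sketch of that last safeguard with the CLI (the image name is arbitrary):
aws ec2 create-image --instance-id i-1234567890abcdef0 --name "post-shrink-boot-volume"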

Doing What They Say Can’t be Done
I’ve had to do this task a few times, but because the occasions were separated by a significant amount of time, both the methods changed slightly and my memory of the exact steps faded. This is the new-and-improved way to convert a bare metal Windows server (I’m doing 2008 R2, but it will work with 2012 R2, etc.) into an EC2 AMI. It took me several days and multiple attempts (mostly due to the time it took to copy the 120 GB image to AWS) and some trial and error, but if you follow these steps you should have success moving your server to the cloud.
Although various commercial tools exist to perform cloud migrations I used all free tools for this physical-to-cloud migration.
Review these prerequisites for VM Import to help ensure success. This AWS document is a good reference for other necessities and steps as well. According to AWS a physical server-to-AMI conversion cannot be done, but we’ll trick them a bit by converting to a Hyper-V VM from physical, then to an AMI, finally launching an EC2 instance.
Step 1 – Prepare Your Server
Prior to migration you should do a little housekeeping to minimize potential issues and decrease the overall time the migration will take. First, clean up your drive(s) by removing any unnecessary files and directories; this makes the virtual disk smaller and reduces the time needed to copy files to AWS. Next, make sure at least one NIC has DHCP enabled (a NIC without DHCP is one of the things that will cause your import to fail). I also took the opportunity to make sure all apps and patches were up to date. I chose not to remove my server from the AD domain at this point – only after a successful import of the server into EC2.
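One quick way to confirm the DHCP requirement before imaging the box – this WMI query is just one approach:
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" | Select-Object Description, DHCPEnabled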
Step 2 – Create VHD (Virtual Hard Disk) from Physical Server
This is simple with the free Sysinternals tool Disk2vhd. Download and run it. Select the volume(s) you want to turn into a VHD and the name of the destination VHD file (by default it uses the NetBIOS name of the server). Make sure to uncheck the “Use Vhdx” option, as AWS will only allow you to import a VHD file, not a VHDX file. It is recommended that you save the VHD file to a drive other than the one you are imaging, but since I didn’t have another drive at the time I wasn’t able to do that, and the conversion still worked fine. The server I am working on is named Alymere, so you’ll see that name throughout.
Step 3 – Import VHD to Hyper-V
Use Hyper-V Manager to import the VHD exported in the previous step. I had another server (at another location) which I was able to install Hyper-V on to perform these steps, but I suppose you could do this on the origin server if it’s the only one you have. Maybe I’ll try it and update later….. Start your newly imported VM to make sure it boots and works as a VM, then shut it down. One critical step is to remove any CD or DVD drives from the VM as these too will cause the EC2/AMI import to fail.
Step 4 – Export VM to VHD
Again, using Hyper-V Manager export the VM. This will create a few directories and possibly additional files, but the only one you need is the new VHD file – in my case this is Alymere.vhd (although it’s the same name as the physical to virtual VHD file it is a different file).
Step 5 – Break up the VHD
If you’ve ever tried to copy a large file over the Internet you know it can be fraught with problems. So, for multiple reasons I used 7-zip to break the file into 100MB chunks. I did it as 7z format with a compression level of “normal.” Although it took about four hours for the compression I saved about eight times that much time when uploading to AWS. My ~120GB VHD file compressed to 41GB of zipped files.
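For reference, the 7-Zip command line equivalent of those settings would look roughly like this (-v100m splits into 100MB volumes; -mx=5 is the “normal” compression level):
7z a -t7z -mx=5 -v100m Alymere.7z Alymere.vhd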
Step 6 – Copy/Upload File(s) to EBS
Since I would have to extract my hundreds of 100MB files back to the original VHD, I copied them with robocopy to an EBS volume on one of my existing EC2 servers, over a VPN connected to my VPC. One of the reasons for breaking the original file into many smaller ones is that if there’s a problem with the copy (as is common over the Internet) I won’t lose much progress. Yes, this can also be addressed with robocopy’s /z (restart) switch – which I would highly recommend – but I’ve had better experience breaking large files into smaller ones. Another reason is that the office where the server resides has terrible upload speeds, so I copied the files to an external drive and had my business partner take it to another office (I’m in a different state). It still took 2-3 days to copy the files from both locations to AWS, but it was considerably faster doing it the way we did – copying zipped files from two locations to EC2 simultaneously.
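A robocopy invocation along these lines would do the job (the paths are illustrative; /Z enables restartable mode, /R and /W tame the retry behavior):
robocopy E:\VHDParts \\10.0.0.50\uploads /Z /R:5 /W:10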
Step 7 – Reassemble VHD
Once the files were all copied to my EBS volume on my EC2 server I used 7-zip to extract the files to the original VHD file. As mentioned previously this whole process (zip, copy, unzip) took several days, but using the methods described I feel it was the most efficient way possible given the circumstances. If you have low upload bandwidth or huge files it may make sense to use the AWS Import/Export service, which I’ve used with great success previously.
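Reassembly is just an extract pointed at the first volume – 7-Zip finds the remaining pieces automatically:
7z x Alymere.7z.001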
Step 8 – Copy/Upload to S3
In order to use AWS’s VM import/export the VHD file(s) have to reside in S3. Some tools (like my beloved CloudBerry) cannot copy files of this size (120 GB), so I used the AWS CLI. Namely, aws s3 cp:
aws s3 cp E:\Temp\Alymere\ALYMERE.VHD s3://<bucket name>/Alymere/
The AWS CLI displays copy progress, including speed.
Step 9 – Import VHD to AMI
This step requires a little extra work. Follow these steps to create an AMI role and policy necessary to access S3, along with the necessary .json files tailored to your needs. With that done run the command (adapted for your use):
aws ec2 import-image --description "Alymere" --disk-containers file://C:\AWS\JSON\containers.json
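For reference, the containers.json it points at looked roughly like this (the bucket name and key are placeholders; the structure follows the VM Import documentation):
[
  {
    "Description": "Alymere",
    "Format": "vhd",
    "UserBucket": {
      "S3Bucket": "<bucket name>",
      "S3Key": "Alymere/ALYMERE.VHD"
    }
  }
]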
Depending on the size of the VM this will take from a few minutes to an hour or more; my 120GB VHD took about 90-120 minutes. Executing the import-image command will produce various output, including a task ID, which can be used to check the progress of the import:
aws ec2 describe-import-image-tasks --import-task-ids import-ami-<task ID>
I ran this several times and could see it progressing.
Upon completion the message, “AWS Windows completed…” is displayed, along with the AMI ID.
Step 10 – Launch EC2 Instance
Finally, an EC2 instance can be launched from this newly created AMI.
aws ec2 run-instances --image-id ami-<AMI ID>
Step 11 – Post Launch
At this point I logged onto my newly created VM to verify all my applications and data were intact. Since they were I removed my “old” physical server from my domain and joined this one. Success!
Troubleshooting VM Import/Export
A variety of issues can cause problems, up to and including outright import failure. I would suggest reading the prerequisites for VM Import and the “Troubleshooting VM Import/Export” AWS pages before beginning, both to avoid issues and to be able to troubleshoot if necessary.
Good luck and happy importing!

Several years ago before the unified AWS CLI was released I wrote about installing the EC2 command line tools. Now it’s time to update that.
It goes without saying that over the decade Amazon has been providing cloud services, their interfaces and tools have matured greatly (along with their overall offerings). Early on Amazon didn’t even have a web console, and we had to rely on a disparate offering of command line tools for managing S3, EC2, ELB, etc. Finally, in the fall of 2013, Amazon released the AWS CLI, a unified set of command line tools that works similarly across platforms (Windows, Linux, Mac) and across AWS services. It’s definitely nice to use the same (or nearly the same) syntax on both Windows and Linux machines to perform various functions.
Installing AWS CLI on Linux (Ubuntu)
The easiest way I’ve found to install and update the AWS CLI on Ubuntu (and Debian-based linux distros) is using apt, with this simple one-liner:
sudo apt install awscli -y
Once installed or updated check the version with:
aws --version
Installing AWS CLI on Windows
Assuming you are installing on Windows 2008 R2 or later, we’ll leverage PowerShell’s native Invoke-WebRequest (AKA wget) to retrieve the latest installer. I’m also assuming you have a directory C:\temp; if not, create it or use a directory of your choosing. Note: these steps can be used to update the AWS CLI to the latest version as well. If you aren’t running at least PowerShell v3 you can update PowerShell to get this functionality. And if you are doing a new install of the AWS CLI, you’ll have to restart PowerShell before the aws commands are available.
To download the latest version of the installer in PowerShell run:
wget -uri https://s3.amazonaws.com/aws-cli/AWSCLI64.msi -OutFile c:\temp\AWSCLI64.msi
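Once downloaded, run the installer, then point the CLI at your credentials (aws configure prompts for your access key, secret key, default region, and output format):
msiexec /i c:\temp\AWSCLI64.msi
aws configure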
Once the CLI is installed and configured you can run various commands to test connectivity to AWS services.
aws s3 ls s3:// (list all S3 buckets)
aws ec2 describe-instances

Prior to updating to PowerShell version 5 make sure to be running the latest version of .NET Framework (.NET 4.6.1 as of this writing). See Checking .NET Versions.
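One way to check the installed .NET Framework release from PowerShell is to read the registry; the Release DWORD maps to versions per Microsoft’s documentation (394254 or higher generally indicates 4.6.1 or later):
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release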
Check Current Version of PowerShell
Run either $Host.Version or $PSVersionTable.PSVersion
Install latest version of .NET.
PowerShell 5 is part of Windows Management Framework (WMF) version 5 and can be downloaded from Microsoft (2008 R2 or 2012 R2). Select the correct download for your version of Windows Server and run the installer. After completion verify the version.