CloudFlare API and PowerShell

CloudFlare has a nicely documented and approachable API for automating all aspects of their growing range of DNS, security and protection features.

The API lives here: https://api.cloudflare.com/

My interest was in quickly using PowerShell to create a zone in CloudFlare and pull the existing external DNS records into it.

The final script was simple. There are really only three parameters to change: the domain name, your CloudFlare authentication email and your API key.

Additionally, the jump_start=$true parameter tells the API to scan for existing DNS records and add any it finds to the new zone created on CloudFlare.

$domainname = "my-domain-name.com"
$auth_email = "info@my-email.com"
$auth_key = "my-cloudflare-api-key"

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("X-Auth-Email", $auth_email)
$headers.Add("X-Auth-Key", $auth_key)

$data = @{
    name=$domainname
    jump_start=$true
}

$uri = "https://api.cloudflare.com/client/v4/zones"

$json = $data | ConvertTo-Json

Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $json -ContentType 'application/json'
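
To confirm what jump_start actually pulled across, the same headers can be reused to query the new zone and list its DNS records. Here is a rough sketch that reuses the variables above; the zone lookup and dns_records endpoints are part of the same v4 API:

#Look up the new zone ID, then list the DNS records CloudFlare imported
$zone = Invoke-RestMethod -Uri "https://api.cloudflare.com/client/v4/zones?name=$domainname" -Method Get -Headers $headers
$zoneid = $zone.result[0].id

$records = Invoke-RestMethod -Uri "https://api.cloudflare.com/client/v4/zones/$zoneid/dns_records" -Method Get -Headers $headers
$records.result | Select-Object type, name, content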

etcd CoreOS cluster on Azure

The basics – what is etcd?

etcd is an open source distributed key value store.

It was created by CoreOS to provide a data store and service discovery for their distributed applications like fleet, flannel and locksmithd.

etcd exposes a REST-based API for applications to communicate with.

etcd runs as a daemon across all cluster members. Clusters should be created with the recommended odd number of nodes (3, 5, 7, etc.) so the cluster can tolerate node failures while a quorum (a majority of members) remains – for example, a three-node cluster tolerates one failure and a five-node cluster tolerates two.

Clusters run with a single leader, with the other members following. If the leader fails, the remaining nodes elect a new one through a consensus algorithm (Raft).

What’s it used for?

Apart from the CoreOS use of etcd, who else uses it?

The big one is Kubernetes – management, orchestration and service discovery of containers across a cluster of worker nodes. etcd is used to store cluster state and configuration data and is core to the operation of Kubernetes.

Apache Mesos and Docker Swarm can also optionally use etcd as their cluster KV stores.

etcd on Azure?

I decided to use the Azure CLI to deploy a three node etcd cluster.

I could have also used PowerShell on Windows or authored an ARM template, but it was interesting to use the Azure CLI, as I tend to use OS X and a Linux desktop more than Windows these days.

The final script sets up:
- An Azure resource group
- An Azure storage account
- A virtual network with a single subnet
- Three vNICs with static IPs
- Three public IPs for connecting in via SSH later if required
- Finally, three CoreOS Linux VMs, each of which uses a cloud-config.yaml file to automate configuration of the etcd cluster and its systemd units. I also add my public SSH key to allow remote SSH logins if required.

The script can be git cloned from https://github.com/seedb/AzureCLI.etcd.git

Alternatively, here it is:

#!/bin/bash
#Tested on MacOS X 10.11.3 and Ubuntu 15.10 with Azure CLI tools installed
#https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/
#Use azure login to get a token against your Azure subscription before running script
#--------------------------------------------------------------------------------------------
#Variables to change
location="australiasoutheast" #Azure DC Location of choice
resourcegroup="etcd-cluster" #Azure Resource Group name
vnetname="etcd-vnet" #Virtual Network Name
vnetaddr="10.1.0.0/16" #Virtual Network CIDR
subnetname="etcd-subnet" #Subnet name
subnetaddr="10.1.0.0/24" #Subnet CIDR
availgroup="etcd-avail-group" #Availability group for VMs
storageacname="etcdstorage01" #Storage Account for the VMs
networksecgroup="etcd-net-sec" #Network Sec Group for VMs
#VM name and private IP for each
vm01_name="etcd-01"
vm_static_IP1="10.1.0.51"
vm02_name="etcd-02"
vm_static_IP2="10.1.0.52"
vm03_name="etcd-03"
vm_static_IP3="10.1.0.53"
coreos_image="coreos:coreos:Stable:835.9.0" #CoreOS image to use
vm_size="Basic_A0" #VM sizes can be listed using azure vm sizes --location YourAzureDCLocationOfChoice
#---------------------------------------------------------------------------------------------
#Azure CLI------------------------------------------------------------------------------------
#Azure Resource Group setup
azure config mode arm
azure group create --location $location $resourcegroup
#Azure Storage account setup
azure storage account create --location $location --resource-group $resourcegroup --type lrs $storageacname
#Azure Network Security Group setup
azure network nsg create --resource-group $resourcegroup --location $location --name $networksecgroup
#Virtual network and subnet setup
azure network vnet create --location $location --address-prefixes $vnetaddr --resource-group $resourcegroup --name $vnetname
azure network vnet subnet create --address-prefix $subnetaddr --resource-group $resourcegroup --vnet-name $vnetname --name $subnetname --network-security-group-name $networksecgroup
#Public IP for VMs (can create SSH inbound rules to access these if required)
azure network public-ip create --resource-group $resourcegroup --location $location --name "$vm01_name"-pub-ip
azure network public-ip create --resource-group $resourcegroup --location $location --name "$vm02_name"-pub-ip
azure network public-ip create --resource-group $resourcegroup --location $location --name "$vm03_name"-pub-ip
#Virtual Nics with private IPs for CoreOS VMs
azure network nic create --resource-group $resourcegroup --subnet-vnet-name $vnetname --subnet-name $subnetname --location $location --name "$vm01_name"-priv-nic --private-ip-address $vm_static_IP1 --network-security-group-name $networksecgroup --public-ip-name "$vm01_name"-pub-ip
azure network nic create --resource-group $resourcegroup --subnet-vnet-name $vnetname --subnet-name $subnetname --location $location --name "$vm02_name"-priv-nic --private-ip-address $vm_static_IP2 --network-security-group-name $networksecgroup --public-ip-name "$vm02_name"-pub-ip
azure network nic create --resource-group $resourcegroup --subnet-vnet-name $vnetname --subnet-name $subnetname --location $location --name "$vm03_name"-priv-nic --private-ip-address $vm_static_IP3 --network-security-group-name $networksecgroup --public-ip-name "$vm03_name"-pub-ip
#Create 3 CoreOS-Stable VMs, fly in cloud-config file, also provide ssh public key to connect with in future
azure vm create --custom-data=etcd-01-cloud-config.yaml --ssh-publickey-file=id_rsa.pub --admin-username core --name $vm01_name --vm-size $vm_size --resource-group $resourcegroup --vnet-subnet-name $subnetname --os-type linux --availset-name $availgroup --location $location --image-urn $coreos_image --nic-names "$vm01_name"-priv-nic --storage-account-name $storageacname
azure vm create --custom-data=etcd-02-cloud-config.yaml --ssh-publickey-file=id_rsa.pub --admin-username core --name $vm02_name --vm-size $vm_size --resource-group $resourcegroup --vnet-subnet-name $subnetname --os-type linux --availset-name $availgroup --location $location --image-urn $coreos_image --nic-names "$vm02_name"-priv-nic --storage-account-name $storageacname
azure vm create --custom-data=etcd-03-cloud-config.yaml --ssh-publickey-file=id_rsa.pub --admin-username core --name $vm03_name --vm-size $vm_size --resource-group $resourcegroup --vnet-subnet-name $subnetname --os-type linux --availset-name $availgroup --location $location --image-urn $coreos_image --nic-names "$vm03_name"-priv-nic --storage-account-name $storageacname
#Azure CLI------------------------------------------------------------------------------------

Generally it takes around 10 minutes to deploy all the components (sometimes a storage account can take 15 minutes by itself – bizarre but repeatable!). Once it’s done, your resource group should look like this.

[Screenshot: the completed etcd-cluster resource group in the Azure portal]

At this point, I created an SSH inbound rule to one of the public IPs on one VM and tested that my etcd cluster was all good.

[Screenshot: verifying etcd cluster health over SSH]

I can then insert some key-value pairs for testing (a quick sketch of that is below) and start thinking about deploying Kubernetes. All in another blog :)
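
As a rough sketch of that key-value testing (and in keeping with the PowerShell theme of this blog), the etcd v2 keys API can be exercised with plain HTTP calls. This assumes an etcd2 member is listening on the default client port 2379 at one of the static IPs from the script, and that you are running it from somewhere that can reach the private subnet (or over an SSH tunnel):

#Write a key via the etcd v2 keys API (assumes a member reachable at 10.1.0.51:2379)
Invoke-RestMethod -Uri "http://10.1.0.51:2379/v2/keys/message" -Method Put -Body "value=HelloFromAzure" -ContentType "application/x-www-form-urlencoded"

#Read the key back - the value is returned under node.value in the JSON response
$result = Invoke-RestMethod -Uri "http://10.1.0.51:2379/v2/keys/message" -Method Get
$result.node.value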

Bulk creating VMs using Nutanix Acropolis ACLI

I have been playing around with the fantastic Nutanix Community Edition (CE) now for a week or more.

Despite the availability of all KVM VM actions in the HTML5 Prism GUI, I wanted to quickly create any number of VMs for testing.

The best way to do this for me was from the shell of the Nutanix controller VM.

I SSH’d into the CVM on my Nutanix CE node, then from the bash shell ran my script which calls the Acropolis CLI commands for creating and modifying VMs.

It loops for 10 iterations, creating VMs with 2GB of memory and 1 virtual CPU each. It also attaches a 20GB disk to each VM on my NDFS container called CNTR1.

for n in {1..10}
do
acli vm.create MyVM$n memory=2048 num_vcpus=1
acli vm.disk_create MyVM$n create_size=20G container=CNTR1
done


This is the tip of the iceberg really; in the script you could also mount an ISO file from your container to each VM and power them on if you wanted to. Better still would have been to clone from an existing VM and create the 10 VMs that way, but the ACLI does not appear to have vm.clone yet, even though it is shown in the Nutanix Bible and the Acropolis Virtualization Administration docs.

The ACLI commands present in the Community Edition (currently the 2015.06.08 version) are missing vm.clone. Make sure you check out the new “Book of Acropolis” section in the Nutanix Bible: http://stevenpoitras.com/the-nutanix-bible/#_Toc422750852

PowerShell/PowerCLI – vMotion VMs between clusters based on Name and IP

A quick post where I threw together a script to vMotion all VMs of a particular wildcard name and a certain IP range between two vSphere clusters (5.0 and 5.5) under the same vCenter Server.

The script assumes your source and destination clusters are using Standard vSwitches (VSS) with identical port group names on each.

In a migration scenario, perhaps your source cluster is using a Standard vSwitch, but the new destination cluster has a vSphere Distributed Switch (VDS).

In this case, I would implement a temporary standard vSwitch with a single NIC on the new cluster, with port group names identical to those on the source cluster.

This makes vMotion migration simple. Once VMs are on the new cluster, vNICs can be bounced from standard to VDS port groups.
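
As a sketch of that last step, once the VMs land on the new cluster the vNICs can be flipped over with PowerCLI. The port group names below ("VM-Net-Temp" on the standard vSwitch and "VM-Net-VDS" on the distributed switch) are purely examples, so substitute your own:

#Move vNICs from the temporary standard port group to the matching VDS port group
$vdsportgroup = Get-VDPortgroup -Name "VM-Net-VDS"

Get-Cluster "MyNewCluster" | Get-VM |
    Get-NetworkAdapter | Where-Object { $_.NetworkName -eq "VM-Net-Temp" } |
    Set-NetworkAdapter -Portgroup $vdsportgroup -Confirm:$false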

Anyway – on with the script. The PowerCLI version used was 6.0 Release 1.

#vMotion VMs based on name and IP address between clusters under one vCenter
#Assumes standard vSwitch networking and consistent port group names between clusters
#Source cluster (Std vSwitch), destination cluster (temp migration Std vSwitch) with the same port group names. Make sure the log folder is created ahead of time

Import-Module VMware.VimAutomation.Core

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope Session -Confirm:$false

#Change your variables here:
$vcenter = "MYvCenterServer"
$sourcecluster = "MyOldCluster"
$destcluster = "MyNewCluster"
$VMnameFilter = "PROD*"
$IPFilter = "10.1.3.*"
$logfolder = "c:\MigrationLogs"

Connect-VIServer $vcenter

$getvms = Get-Cluster $sourcecluster | Get-VM | where name -Like $VMnameFilter | Select Name, @{ L = "IP Address"; E = { ($_.guest.IPAddress[0]) } } | where "IP Address" -Like $IPFilter | select -ExpandProperty name

foreach ($vm in $getvms)
{
#Clear the error collection so the success/failure check reflects only this vMotion
$Error.Clear()
Move-VM -VM $vm -Destination $destcluster -VMotionPriority High -Confirm:$false -ErrorAction SilentlyContinue

If (!$Error)
{ Write-Output $vm | Out-File -Append -FilePath "$logfolder\success.log" }
Else
{ Write-Output $vm, $Error[0] | Out-File -Append -FilePath "$logfolder\failed.log" }

}

Using Azure PowerShell to deploy Windows Server Technical Preview (Windows Server 2015)

I whipped up a script so I could easily spin up Windows Server 10 VMs for further testing without using the Azure portal wizard.

It’s mostly self-explanatory; the Azure PowerShell cmdlets have great help, and the online resources are immense as well.
http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/

Basically, it connects to Azure with your provided credentials, then grabs your existing subscription ID and an existing storage account (helpful to have this sorted in the Azure portal first).

You need to provide a username and password for your new VM, plus a cloud service name, in the script (*Change Me*).

It provides Out-GridView prompts to select an existing private virtual network in Azure (again, helpful to have created one already).

It also prompts for the VM instance size – I use Basic A0 for testing (no availability set).

It also prompts to pick the Azure Datacenter. The VM must be created in the same DC as the storage account. For New Zealand customers, West US seems fastest. No hard testing behind that opinion.

It automatically picks the Windows Server Technical Preview image, but you can change that to whatever image takes your fancy really. You can get a list using Get-AzureVMImage.
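
For example, to browse what is available before settling on an image (just a quick way of eyeballing the list, nothing more):

#List available gallery images with their family and publish date
Get-AzureVMImage | Select-Object ImageFamily, Label, ImageName, PublishedDate | Out-GridView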

Here is the script – probably tons of tweaks and optimisations to do, but it works for my basic testing. Comments and suggestions are very welcome.

#Connect and auth PS session against your Azure creds
Add-AzureAccount

#Get subscription ID of your azure account
$MySubID = Get-AzureSubscription | select -expand subscriptionid

#Select existing storage account - VM must be created in same DC as storage account
$MyStorageAccountName = Get-AzureStorageAccount | out-gridview -passthru | select -expand StorageAccountName

#Name of your VM
$MyVMName = "*Change Me*"

#Username for your Windows VM
$MyVMAdminUsername = "*Change Me*"

#Password for your Windows VM
$MyVMAdminPassword = "*Change Me*"

#New Cloud Service name
$MYCloudServiceName = "*Change Me*"

#Select internal Azure virtual network for VM (must exist already). Comment out line if you want to use automatic public connection
$MyVMNetwork = Get-AzureVNetSite | Out-GridView -PassThru | select -expand name

#Select the subnet within the chosen virtual network
$MyVMSubnet = Get-AzureVNetSite | Out-GridView -PassThru | Select-Object -ExpandProperty Subnets | select -expand name

#Select VM Instance size from list
$MyInstanceSize = Get-AzureRoleSize | Out-GridView -passthru | select -expand instancesize

#Select Datacenter location from list
$MyVMDCLocation = Get-AzureLocation | Out-GridView -PassThru | select -expand name

#Set the Azure Subscription and the Storage account to be used
Set-AzureSubscription -SubscriptionId $MySubID -CurrentStorageAccountName $MyStorageAccountName

#Select Windows Server Technical Preview - i.e. Server 10
$MyImageName = Get-AzureVMImage | where {$_.Label -eq "Windows Server Technical Preview"} | select -expand imagename

#Build VM
New-AzureQuickVM -Windows -ServiceName $MYCloudServiceName -Name $MyVMName -ImageName $MyImageName -AdminUsername $MyVMAdminUsername -Password $MyVMAdminPassword -Location $MyVMDCLocation -InstanceSize $MyInstanceSize -VNetName $MyVMNetwork -SubnetNames $MyVMSubnet
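
Once the build finishes, a couple of quick checks and an RDP session round things off. This is just a sketch using the same service management cmdlets, reusing the cloud service and VM name variables from the script:

#Check the provisioning status of the new VM
Get-AzureVM -ServiceName $MYCloudServiceName -Name $MyVMName

#Download the RDP file and launch a session to the VM
Get-AzureRemoteDesktopFile -ServiceName $MYCloudServiceName -Name $MyVMName -Launch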

Jumbo frames performance with 10GbE iSCSI

This is yet another voice to add to the opinion that jumbo frames are worth the effort.

This is especially true in a 10GbE iSCSI environment doing a hardware refresh of hosts, storage and switches or a pure greenfield deployment.

The ease of configuring jumbo frames across your environment is helped by a couple of things:

Most 10GbE switches ship with an MTU of 9214 or 9216 out of the box, so they are ready for hosts and storage to be tuned to match. The Arista 7150s I have tested are configured this way out of the box.

Changing vSphere virtual switches and VMkernel interfaces to a large MTU size is now very easy via the vSphere client and Web Client.
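
With PowerCLI, for example, it comes down to a couple of one-liners per host. A sketch only - the host, vSwitch and VMkernel names here (esx01.mydomain.local, vSwitch1, vmk1) are placeholders for whatever your iSCSI networking actually uses:

#Set a 9000 byte MTU on the iSCSI vSwitch and VMkernel interface of one host
$esxhost = Get-VMHost "esx01.mydomain.local"

Get-VirtualSwitch -VMHost $esxhost -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
Get-VMHostNetworkAdapter -VMHost $esxhost -VMKernel -Name "vmk1" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false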

On with the testing.

Hardware I tested with:

vSphere host: Cisco UCSC-C220-M3S, 16-core, 196GB Memory, Intel X520 Dual Port 10Gb SFP+ Adapter. This was running ESXi 5.1 Update 1.

Switch: Arista 7150S-24 (24 SFP+ ports)

Storage: Nimble Storage CS260G (10GbE iSCSI)

This was not meant to be an all-encompassing test across all known workloads, just some synthetic sequential benchmarks to see what difference existed between the default MTU of 1500 bytes and 9000 bytes for iSCSI.

I set up a VM with Windows Server 2012, 4 x vCPU, 8GB RAM, 100GB system disk and a 100GB data disk (for iometer). This was on a 1TB iSCSI VMFS5 datastore presented from the Nimble array.

Using the good old IOMeter, I set up a basic sequential read test of a 2GB file with 32 outstanding I/Os and then ran 4K, 8K and 32K tests for 5 minutes each: once with the ESXi vSwitches/VMkernel interfaces and the Nimble’s data interfaces at 1500 MTU, and once with them set to 9000 MTU. The Arista was set to an MTU of 9214 across all ports at all times (the default).

[IOMeter results: 1500 MTU vs 9000 MTU]

The results showed a consistent 10%+ performance increase in both throughput and bandwidth.

It would be difficult not to recommend jumbo frames as a default configuration for new 10GbE iSCSI or NAS environments.

For more great in-depth reading on jumbo frames and vSphere, make sure to check out Michael Webster’s (@vcdxnz001) posts on his blog:

http://longwhiteclouds.com/2012/02/20/jumbo-frames-on-vsphere-5/

http://longwhiteclouds.com/2013/09/10/the-great-jumbo-frames-debate/

iSCSI configuration on Extreme Networks switches

This is a production configuration I have used with 1GbE and 10GbE Extreme Networks switches for iSCSI storage, specifically a Nimble Storage CS260G with 1GbE management interfaces and 10GbE iSCSI data interfaces.

10GbE iSCSI is on one stacked pair of Extreme x670s and the 1GbE management interfaces are on an x460 stack.

[Diagram: Nimble CS260G connected to the Extreme x670 and x460 stacks]

This is the ExtremeOS configuration I have for both networks.

#iSCSI data ports (10GbE) Extreme Networks x670 (stacked pair)

configure ports 1:1,1:2,2:1,2:2 display-string "Nimble01-Prod-iSCSI"
enable flow-control tx-pause ports 1:1,1:2,2:1,2:2
enable flow-control rx-pause ports 1:1,1:2,2:1,2:2
configure vlan "iSCSI_Data" tag 140

configure vlan "iSCSI_Data" ipaddress 10.190.201.250 255.255.255.0
configure vlan "iSCSI_Data" add ports 1:1,1:2,2:1,2:2

#Mgmt ports (1GbE) Extreme Networks x460 (stacked pair)

configure ports 1:1,1:2,2:1,2:2 display-string "Nimble01-Prod-Mgmt"
configure vlan "Storage_Mgmt" tag 130
configure vlan "Storage_Mgmt" ipaddress 10.201.201.1 255.255.255.0
enable ipforwarding vlan "Storage_Mgmt"
configure vlan "Storage_Mgmt" add ports 1:1,1:2,2:1,2:2

I have no spanning tree or rate limiting (also known as broadcast storm control) configured on any iSCSI or management ports. Neither is recommended on iSCSI ports, for performance reasons.

The important architectural points are:

  • iSCSI and management have dedicated VLANs for isolation, performance and security/policy reasons.
  • The switch ports carrying iSCSI data have send/receive flow control enabled, for performance consistency.
  • Switches carrying management and iSCSI data are stacked for redundancy.

http://www.extremenetworks.com/products/summit-all.aspx

http://www.nimblestorage.com/products/specifications.php