Accessing SQL Azure from Kubernetes on Google Cloud Platform
Today we’re sharing a small open-source project which we use as part of our cloud/containerisation strategy.
It takes the form of a straightforward Linux bash script wrapped up in a Docker container, and allows the use of SQL Azure from a Kubernetes cluster on another cloud provider, such as GCP.
The issue you’ll encounter by default is that each Kubernetes node is assigned a new public IP at random. You can open the database server firewall, but as soon as the node disappears or your pod pops up somewhere else, it’ll be blocked again. There are a few open discussions and feature requests around dealing with this, but in the meantime the recommendation seems to be to set up another VM with NAT to route the outbound traffic through a single, known IP.
There is, however, another approach specific to SQL Azure. Here’s what you can do:
1. Create a secret in Kubernetes with your Azure username and password:
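A minimal sketch of that step, assuming you want a generic secret with `username` and `password` keys (the secret name `azure-credentials` and the placeholder values are illustrative assumptions, not taken from the project):

```shell
# Illustrative only: the secret name "azure-credentials" and the credential
# values below are placeholders -- substitute your own.
kubectl create secret generic azure-credentials \
  --from-literal=username='someone@example.com' \
  --from-literal=password='your-azure-password'
```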
2. Add a new init container to your deployment or pod specification YAML file:
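An init container entry might look like the sketch below. The image name, secret name and server address are assumptions; adjust them to match the published image, the secret from step 1, and your own database server:

```yaml
# Sketch only: image name, secret name and server address are placeholders.
spec:
  initContainers:
    - name: sql-azure-firewall-opener
      image: redriversoftware/sql-azure-firewall-opener  # assumed image name
      env:
        - name: username
          valueFrom:
            secretKeyRef:
              name: azure-credentials   # the secret created in step 1
              key: username
        - name: password
          valueFrom:
            secretKeyRef:
              name: azure-credentials
              key: password
        - name: fqdn
          value: myserver.database.windows.net  # your SQL Azure server
```

Once the pod starts, `kubectl logs <pod-name> -c sql-azure-firewall-opener` will show the init container’s output.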
3. Redeploy and you’re set!
Check the init container logs to make sure everything worked.
Check out the source at https://github.com/RedRiverSoftware/sql-azure-firewall-opener.
The Docker container is based on microsoft/azure-cli and uses http://whatismyip.akamai.com/ to determine the node’s public IP.
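As a rough sketch of that lookup (an assumption about the script’s internals, not a quote from it): the Akamai service returns the caller’s address as plain text, so the script can capture it with curl and sanity-check the result. A sample value is used below so the snippet runs offline; the live call would be `ip="$(curl -s http://whatismyip.akamai.com/)"`.

```shell
# Sample value stands in for the live curl call so this runs offline.
ip="203.0.113.42"

# Basic sanity check that the response looks like an IPv4 address.
if printf '%s' "$ip" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
  echo "got public IP: $ip"
else
  echo "unexpected response: $ip" >&2
fi
```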
The enclosed bash script uses environment variables as parameters:
- rulename – the name of the firewall rule to create or update. Defaults to azsql_(hostname). If you specify a fixed value instead, it will only work for single-replica deployments/pods.
- username – Azure username
- password – Azure password
- fqdn – the SQL Azure database server host
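As a sketch of how that rulename defaulting typically works in bash (an assumption about the script’s internals, not a quote from it):

```shell
# Fall back to azsql_<hostname> when no rulename is supplied (assumed behaviour).
rulename="${rulename:-azsql_$(hostname)}"
echo "using firewall rule: ${rulename}"
```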
Note that this should be used as part of a wider strategy: by default the script adds a new firewall rule for each public IP it encounters, but never retires unused ones.
Source code and a Docker image are available now. Questions, issues and pull requests all very welcome over at GitHub!