I'd like to start this post by discussing some history. Before GitHub Actions there was Azure DevOps Pipelines, which has awesome built-in integration with Azure VM Scale Sets. It is a great feature that I still use today. Some of the benefits of the Azure DevOps integration with VM Scale Sets include:
Ephemeral VMs. Each VM can be deleted after running a job.
Automatic scaling of the VMs based on demand.
You can run Docker operations directly on the VM. You don't have to worry about any of the issues or concerns that come from running Docker-in-Docker.
Azure DevOps automatically installs the agent software on each VM instance, and automatically connects it to your Azure DevOps project. You don't have to worry about installing the agent or dealing with authentication.
If you'd like to learn more about Azure DevOps Scale Set agents, then check out another one of my posts here.
What about GitHub Actions?
When I first began using GitHub Actions, the first thing I looked for was the same integration with VM Scale Sets that I had been using in Azure DevOps. However, I quickly learned that this integration does not exist.
Sadly, GitHub's answer was to create your own standalone VMs, manually install the agent (runner) software on each one, and manually authenticate / connect each one to your GitHub account. Luckily, you can still use Microsoft's VM images, with all the included software, so at least there's that. Lastly, just like everything else in this scenario, scaling your VMs is a completely manual process.
Needless to say, this solution has some issues. One of the biggest ones is that there is no auto-scaling.
Basics of Actions Runner Controller
Since GitHub did not provide an option for self-hosted auto-scaling runners, the community decided to step in and do something about it. They created Actions Runner Controller to fill this auto-scaling void. However, it must be noted that this solution is quite different from Azure VM Scale Sets: Actions Runner Controller utilizes containers running on Kubernetes. But, more on the technical details later. Let's dig into the backstory of Actions Runner Controller a little bit more.
This project was originally started by summerwind, and then later taken over by others, such as mumoshu and toast-gear. Eventually, it grew so big that GitHub took notice, took over the project themselves, and started adding new features to it. However, Actions Runner Controller is still in a strange "halfway" state. On one hand, you have the older features which are only supported by the community. On the other hand, you have the new features which were created by, and are officially supported by, GitHub. To make it even more confusing, there are multiple different GitHub organizations and multiple different repos where everything is currently stored. I tried my best to outline everything in the diagram you see below.
All legacy assets were put under the GitHub Organization named https://github.com/actions-runner-controller. Currently, there are 6 repos found there.
The latest and greatest assets for ARC can be found under a different GitHub Organization named https://github.com/actions . Inside the org is a repo called "Actions Runner Controller" and this is where the bulk of the solution is found. However, be warned, this repo is a mix of old / community-supported features, and new / GitHub-supported features. I show an example above, the "charts" folder, which currently includes both the old charts and the new charts. This repo also contains the old Dockerfiles (and only the old ones). To find the new Dockerfile you must go to another repo called "runner" which I've also shown above.
Yes, this is all quite confusing. GitHub tried to clarify things, as much as possible, in a discussion that can be found here.
Important: From this point on, this article will only focus on the new supported features of ARC that were created by GitHub.
Technical Details of ARC
With all that out of the way, let's finally discuss the technical details of Actions Runner Controller and how it works. Like I stated earlier, to use ARC you must be running Kubernetes. The process is fairly simple, and involves installing 2 different Helm charts into your cluster.
Helm Chart 1: The Controller(s)
The first chart will create a Kubernetes deployment made up of 1 replica (Pod), by default. But, if desired, you can pass overrides to the Helm chart in order to add extra, high-availability replicas to this deployment. It also creates other things, such as roles, role bindings, and service accounts. Lastly, it will create multiple new Custom Resource Definitions (CRD) in your cluster.
The point of this deployment is to run the various "Controllers" that are used to manage and orchestrate the ARC system.
You only need to install this chart once, as this 1 controller deployment can manage multiple different runner scale sets.
This chart should be deployed into its own unique namespace.
A copy of the Helm chart can be found here.
The default values for the chart can be found here.
The container image deployed by the chart can be found here.
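As a sketch, installing the controller chart from its OCI registry typically looks like the following. The release name `arc` and the namespace `arc-systems` are just example names I've chosen, not requirements:

```shell
# Install the ARC controller chart into its own dedicated namespace.
# "arc" and "arc-systems" are example names; pick whatever fits your cluster.
helm install arc \
  --namespace arc-systems \
  --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
```

You only need to run this once per cluster, since the one controller deployment can serve every Runner Scale Set you create later.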
Helm Chart 2: The Runner Scale Set(s)
The second chart will create a Runner Scale Set. A Runner Scale Set is a new term that we haven't really discussed yet, but it essentially refers to a bunch of GitHub Runner Pods that are grouped together and auto-scaled as a unit. Using the term "Scale Set" is a little misleading, in my opinion. Naturally, you want to compare it with the Azure VM Scale Sets that are used by Azure DevOps. But, rest assured, these are definitely not the same as VM Scale Sets.
The chart also creates other things, such as roles, role bindings, and service accounts.
If desired, you can deploy this chart multiple times to deploy multiple different Runner Scale Sets.
This chart should be deployed into its own unique namespace.
If desired, multiple Runner Scale Sets can share the same namespace.
A copy of the Helm chart can be found here.
The default values for the chart can be found here. Some important values that you may want to override include:
githubConfigUrl
What GitHub level do you want the Scale Set tied to? This can be an Enterprise, an Organization, or a Repository.
githubConfigSecret
Your Runner pods need a way to authenticate to GitHub. For best security, you should pre-create a Kubernetes secret that contains this authorization info. Then, you can reference that secret here. See this link for more info.
runnerGroup
This is the GitHub Runner Group that the Runner Scale Set will be attached to. This Runner Group must already exist in GitHub; the chart will not create it for you.
minRunners
The minimum number of runner pods to keep running at all times, even when no jobs are queued.
maxRunners
The maximum number of runner pods that the Scale Set is allowed to scale up to.
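Pulling those values together, a minimal override file might look like this. The URL, secret name, and numbers below are placeholders for illustration, not values you should copy as-is:

```yaml
# values.yaml -- example overrides for the Runner Scale Set chart
githubConfigUrl: "https://github.com/my-org"   # org-level here; an enterprise or repo URL also works
githubConfigSecret: arc-github-secret          # pre-created Kubernetes secret holding your auth info
runnerGroup: "Default"                         # must already exist in GitHub
minRunners: 2                                  # pods kept warm at all times
maxRunners: 10                                 # upper bound for auto-scaling
```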
For each Runner Scale Set that you create, you will notice that a new standalone "Listener" pod gets automatically created in namespace 1 (the controller's namespace). Exactly one "Listener" pod is created per Scale Set.
You should also see new pods get created in namespace 2 (the Scale Set's namespace). These are your ephemeral runner pods. You'll see them spin up and down as needed. For example, if you set the "minRunners" option to 2, then you should always see at least 2 runner pods in namespace 2.
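As a sketch, deploying one Runner Scale Set and then checking both namespaces might look like the following. The release name `arc-runner-set`, the namespace names, and the `values.yaml` file are example names I've chosen for illustration:

```shell
# Install a Runner Scale Set into its own namespace ("arc-runners" is an example name).
# The release name ("arc-runner-set") becomes the label your workflows target.
helm install arc-runner-set \
  --namespace arc-runners \
  --create-namespace \
  -f values.yaml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set

# The standalone "Listener" pod shows up alongside the controller (namespace 1)...
kubectl get pods -n arc-systems

# ...while the ephemeral runner pods come and go in the Scale Set's namespace (namespace 2).
kubectl get pods -n arc-runners
```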
Viewing this on GitHub
Let's say I ran the second Helm chart twice, and therefore I created 2 different Runner Scale Sets:
Name: arc-runner-set
minRunners: 2
runnerGroup: Default
Name: arc-runner-set-part-2
minRunners: 1
runnerGroup: Default
What is this going to look like when I view the "Default" Runner Group on GitHub?
I see that my single Runner Group ("Default") has entries for everything:
I see entries for each ephemeral runner pod:
arc-runner-set-8hfn6-runner-cb7kb
arc-runner-set-8hfn6-runner-74cdh
arc-runner-set-part-2-fdcqj-runner-hw5sq
I also see entries for both scale sets:
arc-runner-set
arc-runner-set-part-2
The entries for each scale set are special. For example, if I click on the entry for arc-runner-set then I see the following:
You will notice that labels are assigned only to the scale set entries. And that is precisely the label I use when I want my GitHub Workflows to target this Scale Set. For example, to target this specific Scale Set, I would use this configuration in my GitHub Workflow job:
runs-on: arc-runner-set
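Putting that into context, a minimal workflow targeting this Scale Set might look like the sketch below. The workflow name, file path, and steps are placeholders; the only part that matters for ARC is the `runs-on` label matching the Scale Set's name:

```yaml
# .github/workflows/arc-demo.yml -- example workflow targeting the Scale Set
name: arc-demo
on: push

jobs:
  build:
    # This label is the name of the Runner Scale Set (the Helm release name).
    runs-on: arc-runner-set
    steps:
      - uses: actions/checkout@v4
      - run: echo "This job runs on an ARC ephemeral runner pod"
```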
Conclusion
This has been a fairly high-level overview of ARC. I hope you got some benefit out of it. There's a lot to understand about the history of this project and how it came to be. Even today, if you search for ARC you will likely get a lot of references to the original project created by summerwind. Hopefully, my article will help clear up any confusion about the current state of the project.
There are a lot of topics left to cover about ARC that I didn't even touch. For example, how to create and use your own container image, how to switch between Docker-in-Docker and Kubernetes modes, and more. If you are interested in more posts about ARC then please let me know.