Every developer knows the feeling when a project starts to grow. More and more code is added, tests pile up, new features appear – and suddenly something that used to compile in a minute now takes half an hour, or even three hours. During that time the computer is practically unusable – CPU at 100%, RAM maxed out, and the only thing left to do is… wait.
Sure, a performance problem like this could be solved by buying every developer a high‑end workstation costing thousands, but that’s not always possible – financially or organizationally. That’s why we started looking for another solution: instead of upgrading every single laptop, what if we could simply offload the heavy lifting somewhere else?
That’s where the idea of a remote build server comes in.
Instead of equipping everyone with a high-end workstation, we decided to try something different – offloading builds to a dedicated remote server. Here's what that looked like in real life.
Alright, but how can a laptop that doesn’t have 64 cores and hundreds of gigabytes of RAM suddenly take advantage of that kind of power?
In short: Docker and BuildKit. Your laptop becomes a command center, offloading the heavy lifting over the network – without any messy changes to your codebase.
Thanks to Docker, the whole setup is extremely straightforward:
For building Docker images, we decided to use BuildKit, which can run as a standalone daemon dedicated to building images and exposed over the network. It comes with many options, but in the simplest scenario, starting a build server boils down to running a BuildKit container.
Here is an example of how to launch such a server:
```bash
# Run a privileged BuildKit daemon that listens for remote clients on TCP port 1234
docker run -d --rm \
  --name=remote-buildkitd \
  --privileged \
  -p 1234:1234 \
  moby/buildkit:latest \
  --addr tcp://0.0.0.0:1234
```
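If you want to make sure the daemon came up correctly, a quick sanity check can look like this (buildctl is BuildKit's own CLI and is only needed if you want to probe the daemon over the network):

```bash
# On the build server: confirm the daemon started and is listening on :1234
docker logs remote-buildkitd

# From any machine with buildctl installed: ask the daemon to list its workers
buildctl --addr tcp://1.2.3.4:1234 debug workers
```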
From the developers' perspective, using the remote build server comes down to just two simple commands: registering the remote driver and setting the new builder as the default for buildx:
```bash
# Register the remote BuildKit daemon as a buildx builder
docker buildx create --name remote-build-server --driver remote tcp://1.2.3.4:1234

# Make it the default builder
docker buildx use remote-build-server
```
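To confirm that the laptop can actually reach the daemon, buildx can list and bootstrap the new builder:

```bash
# List all builders; the active one is marked with an asterisk
docker buildx ls

# Establish the connection and print details about the remote daemon
docker buildx inspect --bootstrap remote-build-server
```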
To build any Dockerfile on the remote server, instead of the standard docker build you run:
`docker buildx build --builder remote-build-server --load .`
The `--load` flag brings the finished image back into the local Docker image store, so it can be run exactly as if it had been built locally.
Interestingly – and particularly useful for larger projects – once the remote builder is set as the default, Docker Compose builds on the remote server as well. The effect is that images build in just a few minutes, which makes implementing and testing new features much faster and more pleasant. During a build the developer's computer essentially gets a break, so they can comfortably answer emails or join client meetings without the risk of their system grinding to a halt from resource exhaustion.
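As an illustration, a minimal sketch of what this looks like in practice, assuming a compose file whose services have build: sections (the service name below is hypothetical):

```bash
# With the remote builder selected via `docker buildx use`,
# a plain compose build already runs on the build server
docker compose build

# Or rebuild a single service (hypothetical service name)
docker compose build api
```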
The next step was creating user accounts – a straightforward process with no real complexity. Standard system accounts were set up, and every developer was given SSH access. This makes it easy to mount the project directory from the server directly in VSCode or another IDE, or simply to work in the terminal, depending on personal preference.
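For reference, a minimal sketch of what setting up one such account can look like on a Debian/Ubuntu-style server – the username and key file here are purely illustrative:

```bash
# Create a standard system account for a developer (hypothetical username)
sudo adduser --disabled-password --gecos "" john.smith

# Grant SSH access by installing the developer's public key (hypothetical file)
sudo mkdir -p /home/john.smith/.ssh
sudo cp john.smith.pub /home/john.smith/.ssh/authorized_keys
sudo chown -R john.smith:john.smith /home/john.smith/.ssh
sudo chmod 700 /home/john.smith/.ssh
sudo chmod 600 /home/john.smith/.ssh/authorized_keys
```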
An additional challenge was enabling graphical sessions for users – needed, for example, to test native Linux applications. After all, not everyone uses Linux, and both Windows and macOS have some limitations in this regard.
The chosen solution was the MATE desktop environment, which is lightweight and user-friendly. Running virtual machines was ruled out as it would further strain already heavily loaded hardware.
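One common way to wire MATE into a VNC session is a per-user startup script; here is a sketch following the classic TigerVNC layout (exact paths may differ between distributions):

```bash
#!/bin/sh
# ~/.vnc/xstartup – executed when the user's VNC display starts;
# hand the session over to the MATE desktop
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec mate-session
```

The script has to be executable (`chmod +x ~/.vnc/xstartup`).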
For this purpose, TigerVNC was used to provide virtual graphical sessions. Each user is mapped to a specific display in the vncserver.users configuration file. For example:
```
:1=admin
:2=john.smith
```
Control of individual users’ VNC sessions is managed through systemd services – each user has an assigned service that starts a session on a specific display. The advantage of this approach is that the service can be easily started automatically at system boot, permissions can be restricted, and necessary environment variables or other parameters can be set.
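Where the TigerVNC package ships its systemd template unit (vncserver@.service), this boils down to enabling one instance per display from the mapping above – a sketch, assuming that template is available:

```bash
# Enable and start one session per entry in vncserver.users
sudo systemctl enable --now vncserver@:1.service
sudo systemctl enable --now vncserver@:2.service

# Inspect the state of an individual user's session
systemctl status vncserver@:2.service
```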
Once VNC is configured, a user can log in by connecting via a VNC client. For example, the user admin would enter 1.2.3.4:1 with their password, while john.smith would use 1.2.3.4:2, and so on for other users.
Under the hood, each session actually listens on the VNC base port (5900) plus the display number – so in our example, ports 5901 and 5902 respectively. Connecting via the exact port number also works and can be useful in custom setups where the standard ports are changed.
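For example, with the TigerVNC client – and, if only SSH is exposed to the outside world (an assumption, not part of the setup described above), an SSH tunnel:

```bash
# Connect directly using display notation (TCP port 5902 under the hood)
vncviewer 1.2.3.4:2

# If only SSH is reachable from outside, tunnel the VNC port first
ssh -L 5902:localhost:5902 john.smith@1.2.3.4
vncviewer localhost:5902
```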
Each developer can use this solution in the way that suits them best: working purely over SSH in the terminal, or logging in to a full graphical desktop over VNC. Both approaches lead to the same outcome: fast builds without putting a load on local hardware.
One of our developers describes their daily workflow with the remote build server (RBS):

> I can log in to the RBS either through VNC (a full virtual desktop with a browser and GUI apps) or via SSH. The server provides huge resources – 250 GB of RAM, for example – which makes it possible not only to build large projects but also to run extensive test suites that would be practically impossible locally. I’ve also configured IntelliJ to connect directly to repositories on the RBS. That way, I don’t need to keep the code on my laptop, which has limited resources. My machine is just a client – all the heavy processing happens on the server. This significantly reduces the load on my laptop and makes working far more comfortable.
We have been using this solution for over a month now, and the effects in daily work are clearly noticeable. For the purpose of this article, however, we prepared an additional “live” test to show the difference in as objective conditions as possible.
For the benchmark, we chose to build a Docker image from the source code of Rustypaste – a minimalist, standalone service for file sharing and link shortening, similar to a private pastebin service. We deliberately chose Rustypaste because we couldn’t use our client’s code, and the project is complex enough to realistically reflect typical loads.
To ensure accurate measurements, we excluded the time spent downloading Docker image layers and focused exclusively on the build process itself. The results showed a clear contrast between the local system and the remote server:
The difference is more than twofold – for larger projects, such savings translate into tens of minutes per day, and over a month, even hours of regained time. Additionally, the local computer remains fully responsive and does not slow down, which significantly improves the comfort of work.
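For reference, a sketch of how such a comparison can be reproduced – Rustypaste ships its own Dockerfile, and the exact flags below are our choice rather than part of any official procedure:

```bash
# Fetch the benchmark project
git clone https://github.com/orhun/rustypaste && cd rustypaste

# Build locally with the default builder, skipping the cache
time docker buildx build --builder default --no-cache --load -t rustypaste:local .

# Build the same image on the remote build server
time docker buildx build --builder remote-build-server --no-cache --load -t rustypaste:remote .
```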
In summary, implementing a remote build server delivers truly measurable benefits. Most importantly, it requires no changes in code or additional project configuration from the team – developers can work exactly as they did before. The main difference is that instead of putting load on their laptop, all the heavy work runs on a powerful server.
This means the local machine no longer runs at 100% CPU, doesn’t get bogged down, and allows smooth multitasking during builds. Build times shorten on average by two to three times, and for really large projects, the difference can be even more dramatic.
From a financial perspective, this is also a more cost-effective solution – one high-performance server costs much less than premium workstations for the entire team.
Finally, every developer can use the system in the way they prefer – some like the terminal and classic commands, others prefer the convenience of a remote desktop – but regardless of the chosen mode, the result is the same: faster builds, smoother work, and less frustration waiting for compilation.