Bitbucket Pipeline + Docker + Gradle Automatic Deployment Practice

Relying on manual deployment has always been a source of frustration for me. Every time a deployment comes up, I procrastinate, telling myself a quick manual deploy will do and that automatic deployment (continuous integration) isn't worth the trouble. Then the next deployment rolls around and the headache starts all over again.


Today, I'm determined to set up automatic deployment. Since my needs are relatively simple, just deploying a single standalone service, I didn't bother with Jenkins, as it would be overkill. My code is hosted on Bitbucket, so I used the Bitbucket Pipelines service, which lets me define simple workflow tasks: after each code push, it can automatically build and deploy.

(The free quota for Bitbucket Pipelines is 50 minutes of build time per month, which should be enough for an individual developer, but try not to run time-consuming operations in the pipeline.)

Additionally, to prevent build errors during deployment and possible future architecture upgrades, I package the project into a Docker image, host the image independently, and then let the server update the code by pulling the latest Docker image.


With the concept in mind, it's time to start "assembling the toy", but before that, let's take stock of the materials at hand.

  • Code hosting + build platform: Bitbucket
  • Docker image hosting: Google Container Registry (GCR)
  • One VPS instance
  • Software to be deployed: a Gradle multi-module Spring Boot project


Configure GCR

You can enable GCR for your project by following Google's documentation directly.
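As a rough sketch of the credential side (names here are placeholders; check the GCR documentation for the authoritative steps), creating a service account and downloading the JSON key that Gradle will use later might look like:

```shell
# Hypothetical project ID and service-account name -- substitute your own.
gcloud iam service-accounts create gcr-push --project my-project

# Grant the account permission to push to GCR (GCR stores images in
# Cloud Storage, hence the storage role).
gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:gcr-push@my-project.iam.gserviceaccount.com" \
    --role "roles/storage.admin"

# Download the JSON key, saved here under the name used later in this post.
gcloud iam service-accounts keys create gcr_keyfile.json \
    --iam-account "gcr-push@my-project.iam.gserviceaccount.com"
```

Keep this key file out of version control; it grants push access to the registry.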

Configuring Gradle to package Docker

Use bmuschko/gradle-docker-plugin to handle Gradle Docker Build. According to the documentation, this plugin conveniently supports zero-configuration packaging of Spring Boot applications.

Introduce the plugin in the build.gradle at the root directory of the project.

buildscript {
    ext {
        gradleDockerPluginVersion = '4.0.4' // see the plugin's GitHub Releases page for the latest version
    }
    repositories {
        gradlePluginPortal()
    }
    dependencies {
        classpath "com.bmuschko:gradle-docker-plugin:${gradleDockerPluginVersion}"
    }
}

Use the plugin in the build.gradle of the submodule that needs to be packaged.

apply plugin: 'com.bmuschko.docker-remote-api'
apply plugin: 'com.bmuschko.docker-spring-boot-application'

Configure Docker packaging options in the build.gradle of the submodule

docker {
    registryCredentials {
        url = ''
        username = '_json_key'
        password = file(project.rootDir.path + '/gcr_keyfile.json').text
    }
    springBootApplication {
        baseImage = 'openjdk:8-alpine'
        ports = [8080]
        tag = "" + version
    }
}

A few words about registryCredentials, which configures the registry that hosts the image. The url should be the corresponding GCR endpoint; if there are multiple regional endpoints, choose according to where the image is actually hosted. The username is literally _json_key, indicating that authentication uses a JSON key file. The password is the content of that key file; since the JSON file sits in the project root while the Gradle task runs in a submodule, the path is built from project.rootDir rather than written relative to the submodule. For how to obtain the key file, refer to the GCR documentation.

The springBootApplication node is used to configure the final image information, which needs no further explanation.

With that, packaging works. If you have Docker installed on your local machine and the daemon's local port 2375 open, you can test the push directly:

gradlew :app:dockerPushImage
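The plugin talks to the Docker daemon over its API. If your local daemon listens on TCP port 2375 as mentioned above, a sketch of pointing the build at it explicitly looks like this (the plugin reads the standard DOCKER_HOST variable; this assumes dockerd was started with a TCP listener):

```shell
# Point the Gradle Docker plugin at a TCP daemon socket instead of the
# default unix socket (assumes dockerd listens on tcp://0.0.0.0:2375).
export DOCKER_HOST=tcp://localhost:2375
./gradlew :app:dockerPushImage
```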

Configure vps

It goes without saying that Docker needs to be properly installed on the VPS. In addition, to pull images smoothly, the Google Cloud SDK also needs to be set up; you can do this by following Google's documentation.
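On the pull side, a minimal sketch of the authentication setup (assuming the Cloud SDK is installed and logged in on the VPS) is to register gcloud as a Docker credential helper:

```shell
# After this, `docker pull gcr.io/<project>/<image>` authenticates
# through the logged-in gcloud account automatically.
gcloud auth configure-docker
```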

Configure the automatic deployment script

Driving everything from the pipeline over SSH alone cannot handle situations where errors occur mid-deploy. Therefore, it is better to set up a one-click deployment script on the host; when the time comes, the pipeline simply executes this script via SSH.

In the script below, replace each ${custom content required} placeholder with your own values.

#!/usr/bin/env bash
cd ~

# If an old container is running, stop and remove it first
if [[ "$(docker ps -q -f name=${container name})" ]]; then
    docker update --restart=no ${container name}
    docker stop ${container name}
    docker rm ${container name}
fi

docker pull ${image name}
docker run -d --restart always -p 8080:8080 --name ${container name} ${image name}

Save the script and grant it execute permission with chmod.
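The `docker ps -q -f name=...` guard prints a container ID only when a matching container is running, so the stop/remove steps are skipped on a fresh host. As a quick sanity check of that pattern, here is a toy sketch that substitutes a stub for the `docker` command (`mycontainer` is a made-up name), so the branch logic can be exercised without a Docker daemon:

```shell
#!/usr/bin/env bash
# Stub that mimics `docker ps -q -f name=<n>`: emits an ID only for
# the one container we pretend is running.
docker() {
    if [[ "$1" == "ps" && "$4" == "name=mycontainer" ]]; then
        echo "abc123"
    fi
}

if [[ "$(docker ps -q -f name=mycontainer)" ]]; then
    echo "old container found, stopping and removing"
fi

if [[ "$(docker ps -q -f name=ghost)" ]]; then
    echo "unexpected: ghost container reported running"
fi
```

The real script relies on the same truthiness test: an empty command substitution makes the `if` fall through, so `docker rm` never runs against a non-existent container.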

Alright, next, you just need to execute ~/ on the host to automatically deploy the latest image.

Connecting Bitbucket and Compute Instance

Navigate to the corresponding Bitbucket project's Settings -> SSH keys to generate a new SSH key, then copy the public key and add it to the VPS (typically to ~/.ssh/authorized_keys).

Additionally, you need to add the host address of the compute instance to the Known hosts in Bitbucket.
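The Known hosts field wants the instance's public host key. Bitbucket can fetch it for you, but it can also be retrieved manually for comparison with ssh-keyscan (`my-vps.example.com` is a placeholder for your host address):

```shell
# Print the VPS's public host keys; comparing this against what Bitbucket
# fetches guards against a man-in-the-middle on first connection.
ssh-keyscan my-vps.example.com
```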

Writing Pipeline Scripts

Arguably this is the most important step, since it is the pipeline script that ties all the previously prepared resources together. Ironically, it comes last, which makes it look less important than it is.

Since authorization for pushing images has already been configured in Gradle, there is no need to authorize GCR separately in the pipeline environment.

So the entire script is divided into two steps.

  • build + push image
  • SSH into the VPS, stop and delete the running old containers, pull the new image and redeploy.
pipelines:
  default:
    - step:
        name: Deploy to Docker
        image: openjdk:8
        caches:
          - gradle
          - docker
        services:
          - docker
        script:
          - chmod +x gradlew
          - bash ./gradlew :app:dockerPushImage
    - step:
        name: Deploy to Production
        deployment: production
        trigger: manual # manual trigger keeps releases under control
        script:
          - ssh -T user@host
          - echo "Successful deployment to Production"