# Troubleshooting Auto DevOps **(FREE)**

This page describes common errors you can encounter when using Auto DevOps, and any available workarounds.
## Trace Helm commands

Set the CI/CD variable `TRACE` to any value to make Helm commands produce verbose output. You can use this output to diagnose Auto DevOps deployment problems.

You can resolve some problems with Auto DevOps deployment by changing advanced Auto DevOps configuration variables. Read more about customizing Auto DevOps CI/CD variables.
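For example, a minimal sketch of enabling verbose Helm output project-wide, assuming your pipeline includes the Auto DevOps template:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Any non-empty value enables verbose output from Helm commands
  # in the Auto Deploy jobs.
  TRACE: "1"
```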
## Unable to select a buildpack

Auto Build and Auto Test may fail to detect your language or framework with the following error:

```plaintext
Step 5/11 : RUN /bin/herokuish buildpack build
 ---> Running in eb468cd46085
    -----> Unable to select a buildpack
The command '/bin/sh -c /bin/herokuish buildpack build' returned a non-zero code: 1
```
The following are possible reasons:
- Your application may be missing the key files the buildpack is looking for.
  Ruby applications require a `Gemfile` to be properly detected, even though
  it's possible to write a Ruby app without a `Gemfile`.
- No buildpack may exist for your application. Try specifying a custom buildpack.
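For example, you can point Auto DevOps at a custom buildpack with the `BUILDPACK_URL` CI/CD variable. A minimal sketch (the buildpack repository shown is illustrative; replace it with the buildpack your application needs):

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Illustrative buildpack URL; substitute the one matching your stack.
  BUILDPACK_URL: "https://github.com/heroku/heroku-buildpack-ruby"
```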
## Pipeline that extends Auto DevOps with only/except fails

If your pipeline fails with the following message:

```plaintext
Unable to create pipeline

    jobs:test config key may not be used with `rules`: only
```

This error appears when the included job's rules configuration has been overridden with the deprecated `only` or `except` syntax.
To fix this issue, you must either:

- Transition your `only/except` syntax to `rules`.
- (Temporarily) Pin your templates to the GitLab 12.10-based templates.
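A minimal sketch of such a transition, using a hypothetical override of the `test` job:

```yaml
# Before: deprecated syntax that conflicts with the included
# template's `rules` configuration.
#
# test:
#   only:
#     - branches

# After: the equivalent `rules` configuration.
test:
  rules:
    - if: $CI_COMMIT_BRANCH
```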
## Failure to create a Kubernetes namespace
Auto Deploy fails if GitLab can't create a Kubernetes namespace and service account for your project. For help debugging this issue, see Troubleshooting failed deployment jobs.
## Detected an existing PostgreSQL database
After upgrading to GitLab 13.0, you may encounter this message when deploying with Auto DevOps:
```plaintext
Detected an existing PostgreSQL database installed on the deprecated channel 1,
but the current channel is set to 2. The default channel changed to 2 in
of GitLab 13.0. [...]
```
Auto DevOps, by default, installs an in-cluster PostgreSQL database alongside your application. The default installation method changed in GitLab 13.0, and upgrading existing databases requires user involvement. The two installation methods are:
- channel 1 (deprecated): Pulls in the database as a dependency of the associated Helm chart. Only supports Kubernetes versions up to version 1.15.
- channel 2 (current): Installs the database as an independent Helm chart. Required for using the in-cluster database feature with Kubernetes versions 1.16 and greater.
If you receive this error, you can take one of the following actions:

- You can safely ignore the warning and continue using the channel 1 PostgreSQL database by setting `AUTO_DEVOPS_POSTGRES_CHANNEL` to `1` and redeploying.

- You can delete the channel 1 PostgreSQL database and install a fresh channel 2 database by setting `AUTO_DEVOPS_POSTGRES_DELETE_V1` to a non-empty value and redeploying.

  WARNING:
  Deleting the channel 1 PostgreSQL database permanently deletes the existing channel 1 database and all its data. See Upgrading PostgreSQL for more information on backing up and upgrading your database.

- If you are not using the in-cluster database, you can set `POSTGRES_ENABLED` to `false` and redeploy. This option is especially relevant to users of custom charts without the in-chart PostgreSQL dependency. Database auto-detection is based on the `postgresql.enabled` Helm value for your release. This value is set based on the `POSTGRES_ENABLED` CI/CD variable and persisted by Helm, regardless of whether or not your chart uses the variable.

  WARNING:
  Setting `POSTGRES_ENABLED` to `false` permanently deletes any existing channel 1 database for your environment.
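Each of these options corresponds to a CI/CD variable. A sketch of where they go, assuming the standard Auto DevOps variables (set exactly one option, not all):

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Option 1: keep using the deprecated channel 1 database.
  AUTO_DEVOPS_POSTGRES_CHANNEL: "1"

  # Option 2: delete the channel 1 database and install a fresh channel 2 one.
  # WARNING: permanently deletes the channel 1 database and all its data.
  # AUTO_DEVOPS_POSTGRES_DELETE_V1: "true"

  # Option 3: you do not use the in-cluster database at all.
  # POSTGRES_ENABLED: "false"
```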
## Error: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
After upgrading your Kubernetes cluster to v1.16+, you may encounter this message when deploying with Auto DevOps:
```plaintext
UPGRADE FAILED
Error: failed decoding reader into objects: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
```
This can occur if your current deployments on the environment namespace were deployed with a
deprecated or removed API that doesn't exist in Kubernetes v1.16+. For example,
if your in-cluster PostgreSQL was installed in a legacy way,
the resource was created via the `extensions/v1beta1` API. However, the Deployment resource
was moved to the `apps/v1` API in v1.16.

To recover such outdated resources, you must convert the current deployments by mapping legacy APIs
to newer APIs. There is a helper tool called `mapkubeapis`
that works for this problem. Follow these steps to use the tool in Auto DevOps:
1. Modify your `.gitlab-ci.yml` with:

   ```yaml
   include:
     - template: Auto-DevOps.gitlab-ci.yml
     - remote: https://gitlab.com/shinya.maeda/ci-templates/-/raw/master/map-deprecated-api.gitlab-ci.yml

   variables:
     HELM_VERSION_FOR_MAPKUBEAPIS: "v2" # If you're using auto-deploy-image v2 or above, specify "v3".
   ```
1. Run the job `<environment-name>:map-deprecated-api`. Ensure that this job succeeds before moving to the next step. You should see something like the following output:

   ```plaintext
   2020/10/06 07:20:49 Found deprecated or removed Kubernetes API:
   "apiVersion: extensions/v1beta1
   kind: Deployment"
   Supported API equivalent:
   "apiVersion: apps/v1
   kind: Deployment"
   ```
1. Revert your `.gitlab-ci.yml` to the previous version. You no longer need to include the supplemental template `map-deprecated-api`.

1. Continue the deployments as usual.
## Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached
As announced in the official CNCF blog post, the stable Helm chart repository was deprecated and removed on November 13th, 2020. You may encounter this error after that date.
Some GitLab features had dependencies on the stable chart. To mitigate the impact, we changed them to use new official repositories or the Helm Stable Archive repository maintained by GitLab. Auto Deploy contains an example fix.
In Auto Deploy, `auto-deploy-image` no longer adds the deprecated stable repository to
the `helm` command. If you use a custom chart and it relies on the deprecated stable repository,
specify an older `auto-deploy-image` like this example:
```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.5"
```
Keep in mind that this approach stops working when the stable repository is removed, so you must eventually fix your custom chart.
To fix your custom chart:

1. In your chart directory, update the `repository` value in your `requirements.yaml` file from `https://kubernetes-charts.storage.googleapis.com/` to `https://charts.helm.sh/stable`.

1. In your chart directory, run `helm dep update .` using the same Helm major version as Auto DevOps.

1. Commit the changes for the `requirements.yaml` file.

1. If you previously had a `requirements.lock` file, commit the changes to the file. If you did not previously have a `requirements.lock` file in your chart, you do not need to commit the new one. This file is optional, but when present, it's used to verify the integrity of the downloaded dependencies.
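For instance, a hypothetical chart with an in-chart PostgreSQL dependency would change its `requirements.yaml` like this (the chart name and version are illustrative):

```yaml
dependencies:
  - name: postgresql
    version: "8.2.1"
    # Before: repository: "https://kubernetes-charts.storage.googleapis.com/"
    repository: "https://charts.helm.sh/stable"
```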
You can find more information in issue #263778, "Migrate PostgreSQL from stable Helm repository".
## Error: release .... failed: timed out waiting for the condition
When getting started with Auto DevOps, you may encounter this error when first deploying your application:
```plaintext
INSTALL FAILED
PURGING CHART
Error: release staging failed: timed out waiting for the condition
```
This is most likely caused by a failed liveness (or readiness) probe attempted during the deployment process. By default, these probes are run against the root page of the deployed application on port 5000. If your application isn't configured to serve anything at the root page, or is configured to run on a specific port other than 5000, this check fails.
If it fails, you should see these failures in the events for the relevant Kubernetes namespace. These events look like the following example:
```plaintext
LAST SEEN   TYPE      REASON      OBJECT                         MESSAGE
3m20s       Warning   Unhealthy   pod/staging-85db88dcb6-rxd6g   Readiness probe failed: Get http://10.192.0.6:5000/: dial tcp 10.192.0.6:5000: connect: connection refused
3m32s       Warning   Unhealthy   pod/staging-85db88dcb6-rxd6g   Liveness probe failed: Get http://10.192.0.6:5000/: dial tcp 10.192.0.6:5000: connect: connection refused
```
To change the port used for the liveness checks, pass custom values to the Helm chart used by Auto DevOps:

1. Create a directory and file at the root of your repository named `.gitlab/auto-deploy-values.yaml`.

1. Populate the file with the following content, replacing the port values with the actual port number your application is configured to use:

   ```yaml
   service:
     internalPort: <port_value>
     externalPort: <port_value>
   ```

1. Commit your changes.

After committing your changes, subsequent probes should use the newly-defined ports.
The page that's probed can also be changed by overriding the `livenessProbe.path` and `readinessProbe.path` values (shown in the default `values.yaml` file) in the same fashion.
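A hypothetical override in the same custom values file, pointing both probes at a `/healthz` endpoint (the path and port are placeholders for whatever your application actually serves):

```yaml
service:
  internalPort: 8080
  externalPort: 8080

# Probe the application's health endpoint instead of the root page.
livenessProbe:
  path: "/healthz"
readinessProbe:
  path: "/healthz"
```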