CLI Capture for IT/DevOps Use

CLI Capture Command

The CLI Capture command is described briefly in the CLI doc, along with instructions for installing the CLI and running the command manually. This doc expands on that feature set with OS-level and DevOps use cases for capture.

Additional OS-level examples are given in the OS/Shell doc, which may overlap with the DevOps tooling examples described here.

Deploying the Airbrake Binary with Ansible

To wrap DevOps-related commands with Airbrake Capture, you will need a copy of the Airbrake CLI binary on the same system as the process you want to capture. One way to do this is to use Ansible to install the CLI on that remote system.

This section explains how to:

  • Create an Ansible role
  • Configure a template for using the Airbrake CLI
  • Use the Ansible role in a playbook

First, create a group_vars file such as airbrake_cli, optionally secured with ansible-vault, containing these vars:

airbrake_cli_project_id: YOUR_AIRBRAKE_PROJECT_ID
airbrake_cli_project_key: YOUR_AIRBRAKE_PROJECT_KEY

Create a role airbrake_cli (usually just a directory under roles/).

Add a file tasks/main.yaml for the role:

- name: download and install airbrake-cli
  # ansible.builtin.unarchive can fetch and unpack a remote tarball in one step
  ansible.builtin.unarchive:
    # the release URL prefix was omitted here; use the airbrake-cli GitHub releases download URL
    src: "<releases_url>/{{ cli_version }}/airbrake_{{ cli_version }}_linux_x86_64.tar.gz"
    dest: /usr/local/bin
    remote_src: yes
    mode: 0755
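To confirm the unpacked binary actually landed where expected, a follow-up task pair using the stock stat and assert modules could be added (a sketch; the task names and register variable are our own):

```yaml
- name: stat the installed airbrake binary
  ansible.builtin.stat:
    path: /usr/local/bin/airbrake
  register: airbrake_bin

- name: fail early if the binary is missing or not executable
  ansible.builtin.assert:
    that:
      - airbrake_bin.stat.exists
      - airbrake_bin.stat.executable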

Add a file defaults/main.yaml for the role:

cli_version: 1.2.6

Be sure to use the latest version available from the GitHub airbrake-cli releases page.

Create an install_airbrake.yaml playbook, or add the role and vars in existing playbooks:

- hosts: some_hosts
  become: yes
  vars_files:
    - "{{ env }}/group_vars/some_vars"
    - "{{ env }}/group_vars/airbrake_cli"
  roles:
    - {role: some_role, tags: [some_tag]}
    - {role: airbrake_cli, tags: [cli]}

Your ansible directory structure should look something like:

|- roles/
   |- airbrake_cli/
      |- tasks/
         |- main.yaml
      |- defaults/
         |- main.yaml
|- install_airbrake.yaml

After which you should be able to run:

ansible-playbook install_airbrake.yaml

Or run it via the alias you created by following the instructions in the CLI doc.

Capturing Docker Container Outputs

Entry Points

Using airbrake capture with Docker entrypoints

To add the airbrake CLI to a Docker image, use a Dockerfile like the one below:

FROM debian:latest
RUN apt-get update && apt-get install -y wget
# the release URL was omitted here; use the airbrake-cli GitHub releases download URL
RUN wget --quiet <releases_url>/airbrake_1.2.2_linux_x86_64.tar.gz
RUN tar -xvzf airbrake_1.2.2_linux_x86_64.tar.gz -C /usr/local/bin
RUN rm airbrake_1.2.2_linux_x86_64.tar.gz
ENTRYPOINT ["airbrake", "capture"]

This adds the airbrake CLI binary to the container and sets the default entrypoint to airbrake capture. You can then build the image with docker build -t <image>:<tag>. After building, run the image with docker run <image>:<tag>, adding the flags the airbrake capture command needs (--project-id and --project-key), for example: docker run <image>:<tag> --project-id <PROJECT_ID> --project-key <PROJECT_KEY>.

Using with a Kubernetes YAML file

The examples provided here, job-entrypoint.yaml and job-overwrite-entrypoint.yaml, show two ways to work with the entrypoint: keep the defined entrypoint and add arguments to it (as in the docker run example above), or define a command that overwrites the default entrypoint with its own values.

In job-entrypoint.yaml, the manifest adds the --project-id and --project-key values needed to know where to send the airbrake capture contents; the job can then be deployed with kubectl apply -f job-entrypoint.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: testing-sentinel-entrypoint
spec:
  template:
    spec:
      containers:
        - name: sentinel-testing-entrypoint
          image: test:test
          args: ["--project-id", "<project_id>", "--project-key", "<project_key>", "--", "echo", "'testing'"]
          imagePullPolicy: Never
      restartPolicy: Never
  backoffLimit: 4

In job-overwrite-entrypoint.yaml, a command is passed that overwrites the default entrypoint, allowing us to define the full command:

apiVersion: batch/v1
kind: Job
metadata:
  name: testing-sentinel-entrypoint-overwrite-job
spec:
  template:
    spec:
      containers:
        - name: sentinel-testing-entrypoint-overwrite
          image: test:test
          command: ["airbrake", "capture", "--project-id", "<project_id>", "--project-key", "<project_key>", "--", "echo", "'testing'"]
          imagePullPolicy: Never
      restartPolicy: Never
  backoffLimit: 4

Utilizing in a Helm Chart

With the above concepts, we can create a Helm chart and parameterize --project-id and --project-key in the values file, with a template such as the following job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}
  labels:
    app: sentinel-testing
spec:
  template:
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: test:test
          {{- if .Values.entrypointOverwrite }}
          command: ["airbrake", "capture", "--project-id", "{{ .Values.project.id }}", "--project-key", "{{ .Values.project.key }}", "--"]
          {{- end }}
          args: ["echo", "'testing'"]
          imagePullPolicy: Never
      restartPolicy: Never
  backoffLimit: 4
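The template reads its values from the chart's values file; a hypothetical values.yaml matching the references above could look like:

```yaml
# toggles whether the airbrake capture command overwrites the entrypoint
entrypointOverwrite: true
project:
  id: "<project_id>"
  key: "<project_key>"
```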

Creating Custom Containers

If you’re already building a custom container image for your app, adding the airbrake binary to the image is straightforward. First, make sure you have the airbrake binary:

# the release URL was omitted here; use the airbrake-cli GitHub releases download URL
wget --quiet <releases_url>/airbrake_1.2.3_linux_x86_64.tar.gz
tar -xvzf airbrake_1.2.3_linux_x86_64.tar.gz

Then add this to your Dockerfile, modifying the CMD line for your own command:

COPY airbrake /airbrake
CMD /airbrake capture echo your command here
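As a fuller sketch, a complete Dockerfile might look like the following. The base image, the /your-app binary, and passing the project id/key via AB_PROJECT_ID/AB_PROJECT_KEY environment variables are all assumptions for illustration (the env var names are borrowed from the CodeBuild example later in this doc):

```dockerfile
FROM debian:stable-slim
# /your-app is a placeholder for your application binary
COPY your-app /your-app
COPY airbrake /airbrake
# wrap the app in airbrake capture; project id/key are assumed to arrive as env vars
CMD /airbrake capture --project-id "$AB_PROJECT_ID" --project-key "$AB_PROJECT_KEY" -- /your-app
```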

Using Existing/Community Containers in Helm

If you run your container in Kubernetes, use Helm to install the service, and don’t want to rebuild your container to include the airbrake binary (or want to use existing community containers), the simplest solution is an init container that has the binary and the cp command, and copies the binary into a shared volume.

In the Helm template:

  - name: init-{{ .Chart.Name }}
    image: ""
    imagePullPolicy: IfNotPresent
    command:
      - "/bin/sh"
      - "-cx"
      - "cp /usr/local/bin/airbrake /airbrake/ && ls -al /airbrake && cp /usr/local/etc/ssl/certs/* /etc/ssl/certs/ && ls -al /etc/ssl/certs"
    securityContext:
      privileged: true
      readOnlyRootFilesystem: false
      runAsNonRoot: false
    volumeMounts:
      - name: airbrake
        mountPath: /airbrake
      - name: certs
        mountPath: /etc/ssl/certs
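The init container only stages the binary; the pod must also declare the shared volumes and the main container must mount them and invoke the copied binary. A sketch, reusing the airbrake and certs volume names from above (the app image and /your-app command are placeholders):

```yaml
  containers:
    - name: {{ .Chart.Name }}
      image: your-app-image:latest  # placeholder
      # run the app through the binary copied in by the init container
      command: ["/airbrake/airbrake", "capture", "--", "/your-app"]
      volumeMounts:
        - name: airbrake
          mountPath: /airbrake
        - name: certs
          mountPath: /etc/ssl/certs
  volumes:
    - name: airbrake
      emptyDir: {}
    - name: certs
      emptyDir: {}
```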

This will use the image provided by airbrake. If you would like to use your own, follow the instructions in the previous section, and be sure to use busybox (or another image that contains /bin/sh) as the base image on the FROM line.

Instrumenting Kubernetes Cronjobs using Helm

This example is a real use case: an internal Kubernetes cronjob airbrake runs, called the pruner. We have a values file which contains these lines:

cronjob:
  projectId: <id>
  projectKey: <key>

And a template job.yaml whose args section we modified from:

      - "/prune"
      - "other"
      - "args"

to:
    {{- if .Values.cronjob.projectId }}
      - "/airbrake"
      - "--project-id"
      - "{{ .Values.cronjob.projectId }}"
      - "--project-key"
      - "{{ .Values.cronjob.projectKey }}"
      - "capture"
      - "--"
    {{- end }}
      - "/prune"
      - "other"
      - "args"

Using the if notation like this allows the job to run with or without an airbrake project key/id, in case it’s not necessary in some environments.
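Since the template if treats an empty string as false, an environment that should skip capture can simply leave the id empty (or unset) in its values file, and the wrapper args drop out. A sketch:

```yaml
cronjob:
  projectId: ""   # falsy, so the airbrake wrapper args are not rendered
  projectKey: ""
```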

Wrapping CodeBuild Jobs

CodeBuild is often configured with a buildspec. Because CodeBuild essentially runs the build job in a container, you will need to download the airbrake binary as a pre-build task in addition to wrapping the build command, for example:

phases:
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      # the release URL was omitted here; use the airbrake-cli GitHub releases download URL
      - wget -q <releases_url>/airbrake_1.2.3_linux_x86_64.tar.gz
      - tar xzf airbrake_1.2.3_linux_x86_64.tar.gz && mv airbrake /usr/local/bin/airbrake
  build:
    on-failure: ABORT
    commands:
      - airbrake capture --project-id $AB_PROJECT_ID --project-key $AB_PROJECT_KEY --message $CODEBUILD_BUILD_ARN make build push
      - make triggerstaging

Note that the --message flag here always uses the CodeBuild ARN, which is the same for every run of the job. This groups the captures under the same job, no matter how the output changes.