Using Bazel with Rust, gRPC, protobuf, and Docker

NOTE: Buck2 was recently released and part of the motivation for this doc is providing a simple example of using Bazel and then replicating it with Buck2 to learn how the two compare.

This doc walks through creating a rust library (crate), rust binary, unit test, docker image, and Google Cloud Run service using Bazel. When I started using Bazel for Rust I ran into fewer roadblocks than I expected, but there were lots of speedbumps, and I hope this walk-through helps you avoid them.

Bazel provides lots of flexibility. In this doc I make certain choices on conventions for directory structure and crate naming that have worked for me, but they aren't the only way to do things, and I encourage you to choose what feels best to you.

This doc won't cover when or why to use monorepos. There's a lot written on that. If cargo is working great for you, you probably don't need Bazel or Buck2. If you think you might want to explore that, this doc shows you how to get a functional setup in place.

If you find any issues with this book please open a bug report at https://github.com/Heeten/hello-monorepo-bazel/issues

Prerequisites and Installing Bazel

I'm going to start off with a minimal VM built on Google Cloud Platform and walk through the process of installing and running Bazel on it. I do this so that these instructions are reproducible for people to follow along with and I don't accidentally miss a step (like installing a C++ toolchain).

Feel free to jump down to Step 1, which covers installing Bazelisk.

Step 0: Building our VM and connecting to it

I'm going to use the gcloud CLI (see the gcloud CLI installation instructions) to set up a VM on Google Cloud. If you already have a linux machine you can skip this.

Log into Google Cloud and set the project

gcloud auth login
gcloud config set project [PROJECT_ID]

Create a debian instance (Bazel of course works on all kinds of different environments; this is just the one I use for this doc).

gcloud compute instances create hellobazel \
  --image-family debian-11 \
  --image-project debian-cloud \
  --zone us-central1-a \
  --machine-type e2-standard-2 \
  --boot-disk-size 20GB

Now let's connect to it

gcloud compute ssh hellobazel

Step 1: Downloading and running bazelisk

Now that we have a new machine we can mess around with, let's install Bazelisk.

You can find the official Bazel install docs at https://bazel.build/install. Those mention Bazelisk is the recommended way to install Bazel, which is what we'll use. Bazelisk is available as a binary you can download from their GitHub release page. We're going to pull down the v1.16.0 binary by running:

mkdir bazel
cd bazel
curl \
  -L https://github.com/bazelbuild/bazelisk/releases/download/v1.16.0/bazelisk-linux-amd64 \
  -o bazel

Now let's make the file we just downloaded executable

chmod +x bazel

And add it to our path

export PATH="${PATH}:$HOME/bazel"

Finally, let's run bazel and see what happens. It should download something and then print the usage. Here's what I get:

$ bazel
2023/04/10 17:25:16 Downloading https://releases.bazel.build/6.1.1/release/bazel-6.1.1-linux-x86_64...
WARNING: Invoking Bazel in batch mode since it is not invoked from within a workspace (below a directory having a WORKSPACE file).
Extracting Bazel installation...
                                                           [bazel release 6.1.1]
Usage: bazel <command> <options> ...

Available commands:
  analyze-profile     Analyzes build profile data.
  aquery              Analyzes the given targets and queries the action graph.
  build               Builds the specified targets.
  canonicalize-flags  Canonicalizes a list of bazel options.
  clean               Removes output files and optionally stops the server.
  coverage            Generates code coverage report for specified test targets.
  cquery              Loads, analyzes, and queries the specified targets w/ configurations.
  dump                Dumps the internal state of the bazel server process.
  fetch               Fetches external repositories that are prerequisites to the targets.
  help                Prints help for commands, or the index.
  info                Displays runtime info about the bazel server.
  license             Prints the license of this software.
  mobile-install      Installs targets to mobile devices.
  modquery            Queries the Bzlmod external dependency graph
  print_action        Prints the command line args for compiling a file.
  query               Executes a dependency graph query.
  run                 Runs the specified target.
  shutdown            Stops the bazel server.
  sync                Syncs all repositories specified in the workspace file
  test                Builds and runs the specified test targets.
  version             Prints version information for bazel.

Getting more help:
  bazel help <command>
                   Prints help and options for <command>.
  bazel help startup_options
                   Options for the JVM hosting bazel.
  bazel help target-syntax
                   Explains the syntax for specifying targets.
  bazel help info-keys
                   Displays a list of keys used by the info command.

Now that we have bazel working, let's start using it in the next chapter!

Create repository and bazel workspace

One of the benefits of Bazel shows up when you have a lot of different components/modules/crates/packages/whatever-you-call-them that need to connect together. In our example we're going to imagine there's a team that owns a rust summation() function and provides a rust interface and rust CLI for it. Then we'll have downstream dependencies in C++ and Rust that use this summation() function.

NOTE: There are different ways to organize a monorepo. Bazel doesn't care which way you use, but sometimes IDEs do. In this example repo, we organize directories by project/team ownership and mix source files from different languages together. An alternative is to have the top-level directories split by language (c++, rust, py, etc) or by project and language.

Create repository

We'll put our repository in $HOME/repo. Let's go ahead and make that:

mkdir $HOME/repo
cd $HOME/repo

Create workspace

Bazel has a concept of a workspace, which I think of as the root of the monorepo. In the repo directory we'll create a file called WORKSPACE, which holds repo-wide configuration for pulling down external dependencies. In this chapter the main external dependency we need is rules_rust, which contains the bazel rules for how to go from rust source files to libraries (crates) and binaries.

Go ahead and open up $HOME/repo/WORKSPACE in your favorite text editor¹ and put this in there. Bazel uses the Starlark language for configuration files, which is a dialect of Python.

# This command tells bazel to load the http_archive rule which is used
# to download other rulesets like rules_rust below
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# This pulls down rules_rust
# The path and sha256 come from https://github.com/bazelbuild/rules_rust/releases
http_archive(
    name = "rules_rust",
    sha256 = "950a3ad4166ae60c8ccd628d1a8e64396106e7f98361ebe91b0bcfe60d8e4b60",
    urls = ["https://github.com/bazelbuild/rules_rust/releases/download/0.20.0/rules_rust-v0.20.0.tar.gz"],
)

#What to load and run to setup rust are documented at https://bazelbuild.github.io/rules_rust/
load("@rules_rust//rust:repositories.bzl", "rules_rust_dependencies", "rust_register_toolchains")

rules_rust_dependencies()

rust_register_toolchains()

Once we've saved that file, if we run bazel again we should see that it no longer prints the WARNING: Invoking Bazel in batch mode... line.

$ bazel
Starting local Bazel server and connecting to it...
                                                           [bazel release 6.1.1]
Usage: bazel <command> <options> ...

[...the same list of available commands and help text as the first run...]
¹ I installed emacs on the VM at this point with sudo apt-get install emacs-nox

Rust Hello World

First we'll create a "hello world" rust binary. Then we'll create the rust library and switch our rust binary to call it. We'll also create a unit test for our rust library.

Hello world rust binary

Once we're done with this chapter, this binary will sum up user-provided arguments. We'll start with just making sure we can build a rust binary and print "Hello world".

We're going to put all the summation related code in the directory $HOME/repo/src/summation. Let's go ahead and make it and cd into it:

mkdir -p "${HOME}/repo/src/summation"
cd "${HOME}/repo/src/summation"

BUILD files

Bazel uses BUILD files to describe all the "targets" (things that get built) in a directory, and the rules for how to build them. The rules_rust rust_binary rule provides options you can set to control how things are built.

WARNING: Bazel aims for hermetic builds; code is built in a sandbox and configuration is stored in BUILD files, not environment variables. To pass rustc environment variables you set them in the BUILD file, since environment variables set in your shell won't pass through to the sandbox.

Open up $HOME/repo/src/summation/BUILD in your favorite text editor and lets setup the build rules for our binary.

# This tells bazel to load the rust_binary rule from the rules_rust package
load("@rules_rust//rust:defs.bzl", "rust_binary")

rust_binary(
    #We'll name the target/binary "executable"
    name = "executable",
    #The list of src files it needs (just main.rs)
    srcs = ["main.rs"],
    #Any libraries/crates it depends on, for now we'll leave this blank
    deps = [],
    #The crate_root file, this would default to main.rs but we put it in for clarity
    crate_root = "main.rs",
)
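As an aside on the hermeticity warning above: rules_rust rules accept a rustc_env attribute for variables your code reads at compile time with env!(). A hedged sketch (the target and variable names here are made up, not part of the example repo):

```python
# Hypothetical: pass a compile-time environment variable through the sandbox.
# main.rs could then read it with env!("BUILD_FLAVOR").
rust_binary(
    name = "executable_flavored",  # hypothetical target name
    srcs = ["main.rs"],
    crate_root = "main.rs",
    rustc_env = {"BUILD_FLAVOR": "demo"},  # hypothetical variable
)
```

Setting BUILD_FLAVOR in your shell instead would have no effect, because the sandbox doesn't see your shell environment.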

Let's also create our main.rs file:

fn main() {
    println!("Hello world");
}

Now lets try to build it by running:

bazel build :executable

And it fails! If you get what I got you'll see something like:

$ bazel build :executable
INFO: Repository local_config_cc instantiated at:
  /DEFAULT.WORKSPACE.SUFFIX:509:13: in <toplevel>
  /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/cc_configure.bzl:184:16: in cc_configure
Repository rule cc_autoconf defined at:
  /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/cc_configure.bzl:143:30: in <toplevel>
ERROR: An error occurred during the fetch of repository 'local_config_cc':
   Traceback (most recent call last):
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/cc_configure.bzl", line 125, column 33, in cc_autoconf_impl
                configure_unix_toolchain(repository_ctx, cpu_value, overriden_tools)
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/unix_cc_configure.bzl", line 349, column 17, in configure_unix_toolchain
                cc = find_cc(repository_ctx, overriden_tools)
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/unix_cc_configure.bzl", line 314, column 23, in find_cc
                cc = _find_generic(repository_ctx, "gcc", "CC", overriden_tools)
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/unix_cc_configure.bzl", line 310, column 32, in _find_generic
                auto_configure_fail(msg)
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/lib_cc_configure.bzl", line 112, column 9, in auto_configure_fail
                fail("\n%sAuto-Configuration Error:%s %s\n" % (red, no_color, msg))
Error in fail:
Auto-Configuration Error: Cannot find gcc or CC; either correct your path or set the CC environment variable
ERROR: /DEFAULT.WORKSPACE.SUFFIX:509:13: fetching cc_autoconf rule //external:local_config_cc: Traceback (most recent call last):
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/cc_configure.bzl", line 125, column 33, in cc_autoconf_impl
                configure_unix_toolchain(repository_ctx, cpu_value, overriden_tools)
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/unix_cc_configure.bzl", line 349, column 17, in configure_unix_toolchain
                cc = find_cc(repository_ctx, overriden_tools)
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/unix_cc_configure.bzl", line 314, column 23, in find_cc
                cc = _find_generic(repository_ctx, "gcc", "CC", overriden_tools)
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/unix_cc_configure.bzl", line 310, column 32, in _find_generic
                auto_configure_fail(msg)
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/cpp/lib_cc_configure.bzl", line 112, column 9, in auto_configure_fail
                fail("\n%sAuto-Configuration Error:%s %s\n" % (red, no_color, msg))
Error in fail:
Auto-Configuration Error: Cannot find gcc or CC; either correct your path or set the CC environment variable
INFO: Repository rust_linux_x86_64__x86_64-unknown-linux-gnu__stable_tools instantiated at:
  /home/parallels/repo/WORKSPACE:18:25: in <toplevel>
  /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/rules_rust/rust/repositories.bzl:203:14: in rust_register_toolchains
  /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/bazel_tools/tools/build_defs/repo/utils.bzl:233:18: in maybe
  /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/rules_rust/rust/repositories.bzl:874:65: in rust_repository_set
  /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/rules_rust/rust/repositories.bzl:496:36: in rust_toolchain_repository
Repository rule rust_toolchain_tools_repository defined at:
  /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/rules_rust/rust/repositories.bzl:333:50: in <toplevel>
ERROR: /home/parallels/repo/src/summation/BUILD:4:12: //src/summation:executable depends on @local_config_cc//:cc-compiler-k8 in repository @local_config_cc which failed to fetch. no such package '@local_config_cc//':
Auto-Configuration Error: Cannot find gcc or CC; either correct your path or set the CC environment variable
ERROR: Analysis of target '//src/summation:executable' failed; build aborted: Analysis failed
INFO: Elapsed time: 8.170s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (93 packages loaded, 187 targets configured)
    Fetching https://static.rust-lang.org/dist/rustc-1.68.1-x86_64-unknown-linux-gnu.tar.gz

For most dependencies, you'll tell Bazel where to find them and it'll pull them down for you. One exception is the C++ toolchain, which rules_rust depends on. To get around this we can install the build-essential package on debian which includes gcc, g++, and libc.

sudo apt-get install build-essential

Once you've installed that, let's try bazel build :executable again and see what happens:

$ bazel build :executable
ERROR: /home/parallels/repo/src/summation/BUILD:4:12: in rust_binary rule //src/summation:executable:
Traceback (most recent call last):
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/rules_rust/rust/private/rust.bzl", line 351, column 34, in _rust_binary_impl
                edition = get_edition(ctx.attr, toolchain, ctx.label),
        File "/home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/external/rules_rust/rust/private/rust.bzl", line 125, column 13, in get_edition
                fail("Attribute `edition` is required for {}.".format(label))
Error in fail: Attribute `edition` is required for @//src/summation:executable.
ERROR: /home/parallels/repo/src/summation/BUILD:4:12: Analysis of target '//src/summation:executable' failed
ERROR: Analysis of target '//src/summation:executable' failed; build aborted:
INFO: Elapsed time: 17.253s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (9 packages loaded, 325 targets configured)

This time it fails saying we didn't set the edition. We could set the edition on the rule manually, but that's kind of annoying if we want to use the same edition across the repo, so let's open up our $HOME/repo/WORKSPACE file and specify an edition on the rust_register_toolchains() call by changing it to:

rust_register_toolchains(edition = "2021")

Hopefully the third time is a charm? Let's see what bazel build :executable does this time:

$ bazel build :executable
INFO: Analyzed target //src/summation:executable (1 packages loaded, 60 targets configured).
INFO: Found 1 target...
Target //src/summation:executable up-to-date:
  bazel-bin/src/summation/executable
INFO: Elapsed time: 16.776s, Critical Path: 5.44s
INFO: 94 processes: 91 internal, 3 linux-sandbox.
INFO: Build completed successfully, 94 total actions

Success! Now before we get up for a coffee break let's just make sure it actually runs. You can use the bazel run subcommand to run the binary.

$ bazel run :executable
INFO: Analyzed target //src/summation:executable (24 packages loaded, 172 targets configured).
INFO: Found 1 target...
Target //src/summation:executable up-to-date:
  bazel-bin/src/summation/executable
INFO: Elapsed time: 0.455s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/src/summation/executable
Hello world

It worked! Note that this ran the binary inside bazel, which printed a bunch of bazel log messages. The second-to-last line of the output says INFO: Running command line: bazel-bin/src/summation/executable.

What is that? Let's go to the repo directory and see:

$ ls -l $HOME/repo
total 28
-rw-r--r-- 1 parallels parallels  798 Apr 20 14:40 WORKSPACE
-rw-r--r-- 1 parallels parallels  782 Apr 20 14:36 WORKSPACE~
lrwxrwxrwx 1 parallels parallels  123 Apr 20 14:40 bazel-bin -> /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/execroot/__main__/bazel-out/k8-fastbuild/bin
lrwxrwxrwx 1 parallels parallels  106 Apr 20 14:40 bazel-out -> /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/execroot/__main__/bazel-out
lrwxrwxrwx 1 parallels parallels   96 Apr 20 14:40 bazel-repo -> /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/execroot/__main__
lrwxrwxrwx 1 parallels parallels  128 Apr 20 14:40 bazel-testlogs -> /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/execroot/__main__/bazel-out/k8-fastbuild/testlogs
drwxr-xr-x 3 parallels parallels 4096 Apr 20 14:37 src

You can see bazel created a bunch of symlinks to a mysterious .cache/bazel directory. When you run bazel, it caches build artifacts to avoid rebuilding things that didn't change, and these symlinks give us a way to access the artifacts bazel produces. If we want, we can run the binary directly by running $HOME/repo/bazel-bin/src/summation/executable. Let's try that:

$ $HOME/repo/bazel-bin/src/summation/executable
Hello world

Now we see the output of the binary without any of the bazel messages because we are invoking it directly.

With this we've successfully configured bazel to compile rust and produce a binary. When we're back from our coffee break we'll create a rust library (crate), unit test it, and make our Hello world program call it.

Rust library

Now that we have hello world let's do something more real by creating a library (crate), unit testing it, and making our binary call it.

Creating the library

We're going to use multiple files for our trivial library to demonstrate how that is handled in Bazel.

One thing we'll need to think about is what we want the crate name to be. In C++ code, a header file's include path is typically based on the path to the file in the repo. If we had Java files, we'd see the same thing: package names are nested and based on their path.

To make something that resembles this for Rust crate names in the monorepo, I've decided to name my crates based on the path to them. I also give them all the same top-level prefix to avoid clashes with third-party crates. This also makes it easy to see a crate name in a source file (like in a use statement) and know where to find it in the monorepo.
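As a concrete illustration of this convention (a hypothetical crate, not one we'll actually build), a library living at //src/billing/core would get a name that mirrors its path:

```python
# Hypothetical BUILD file at src/billing/core/BUILD showing the
# path-based naming convention; this crate is not part of the example repo.
load("@rules_rust//rust:defs.bzl", "rust_library")

rust_library(
    name = "src_billing_core",  # mirrors the repo path src/billing/core
    srcs = ["lib.rs"],
)
```

A use src_billing_core::...; statement in any source file then immediately tells you the code lives at //src/billing/core.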

Having settled the naming dilemma, let's add the library to $HOME/repo/src/summation/BUILD by adding these lines to the file (the name attribute is what the crate name defaults to):

load("@rules_rust//rust:defs.bzl", "rust_library")
rust_library(
    name = "src_summation",
    srcs = [
        "lib.rs",
        "f64.rs",
        "u32.rs",
    ],
    deps = [],
)

We have to list all the files we want to compile here; otherwise Bazel won't copy them into the sandbox where our library is compiled. I like listing all the files explicitly, but if you want to include all "*.rs" files Bazel provides a glob() to do this.
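If you'd rather not maintain the file list by hand, the same library could be declared with glob(), as a sketch:

```python
# Alternative to listing srcs explicitly: glob() picks up every .rs file
# in this directory. Note that glob() does not cross package boundaries,
# i.e. it won't descend into subdirectories that have their own BUILD file.
load("@rules_rust//rust:defs.bzl", "rust_library")

rust_library(
    name = "src_summation",
    srcs = glob(["*.rs"]),
    deps = [],
)
```

The tradeoff is that a stray .rs file dropped in the directory silently becomes part of the build, which is why I prefer the explicit list.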

Now let's make $HOME/repo/src/summation/lib.rs:

pub mod f64;
pub mod u32;

If we don't have the mod lines, when bazel runs rustc it'll ignore the f64.rs and u32.rs files, since rustc uses the crate root source file to figure out what to compile. Including them in the BUILD file gets them copied over to the sandbox rustc runs in; adding them to lib.rs gets rustc to compile them.

And lets make $HOME/repo/src/summation/f64.rs:

pub fn summation_f64(values: &[f64]) -> f64 {
    values.iter().sum()
}

Wow, that's a boring function. Clearly I picked a simple example. Let's make a boring $HOME/repo/src/summation/u32.rs file as well:

pub fn summation_u32(values: &[u32]) -> u32 {
    values.iter().sum()
}
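Note that summation_u32 will overflow on large inputs (panicking in debug builds, wrapping in release builds). If that matters for your use case, a checked variant is one option; this is a sketch, not part of the example repo:

```rust
// Hypothetical checked variant: returns None if the running total
// overflows u32, instead of panicking or wrapping.
pub fn summation_u32_checked(values: &[u32]) -> Option<u32> {
    values.iter().try_fold(0u32, |acc, &v| acc.checked_add(v))
}
```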

Now let's build it. From anywhere in the workspace we can build this using its full path:

bazel build //src/summation:src_summation

The // maps to the root of the workspace which is $HOME/repo in our example, //src/summation says we are talking about that path from the workspace root, and then src_summation is the target inside the build file that we are trying to build. If we're already in $HOME/repo/src/summation we can omit the path and just use bazel build :src_summation for short.

We can also run bazel build :all to build all the targets in the directory we are in. This should be a no-op if you already built the executable and library manually: since none of the source files have changed, bazel just uses the cached build and doesn't need to remake them.

Unit test rust library

To add tests to our rust library we'll use the rules_rust rust_test rule. We can put tests in our library source files or separate them out into their own files. In this case, since our unit tests are simple, we'll put them in the same source files as the functions they test.

Let's add the rule to $HOME/repo/src/summation/BUILD by adding these lines:

load("@rules_rust//rust:defs.bzl", "rust_test")
rust_test(
    name = "lib_test",
    crate = ":src_summation",
    deps = [],
)

We can also combine all the load lines into one, consolidating them into this at the top of BUILD:

load("@rules_rust//rust:defs.bzl", "rust_binary", "rust_library", "rust_test")

Now we can run it. Bazel can automatically detect and rerun all the tests that might be affected by a change; when I'm testing I usually just run bazel test //..., which runs all the tests in the workspace. If you run it you should see something like:

INFO: Analyzed 3 targets (2 packages loaded, 156 targets configured).
INFO: Found 2 targets and 1 test target...
INFO: Elapsed time: 0.874s, Critical Path: 0.34s
INFO: 5 processes: 2 internal, 3 linux-sandbox.
INFO: Build completed successfully, 5 total actions
//src/summation:lib_test                                                 PASSED in 0.0s

Executed 1 out of 1 test: 1 test passes.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.

You can see it "ran" our tests, but we haven't defined any tests yet, so it's a bit of a no-op. Let's add a test that we expect to fail and see what happens. We'll add this to $HOME/repo/src/summation/f64.rs:

pub fn summation_f64(values: &[f64]) -> f64 {
    values.iter().sum()
}

#[cfg(test)]
mod test {
    use super::summation_f64;

    #[test]
    fn simple_test() {
        let res = summation_f64(&[0.0, 1.0, 2.0]);
        assert_eq!(res, 0.0);
    }
}

And run bazel test //... again:

INFO: Analyzed 3 targets (0 packages loaded, 0 targets configured).
INFO: Found 2 targets and 1 test target...
FAIL: //src/summation:lib_test (see /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/execroot/__main__/bazel-out/k8-fastbuild/testlogs/src/summation/lib_test/test.log)
INFO: Elapsed time: 0.604s, Critical Path: 0.40s
INFO: 4 processes: 4 linux-sandbox.
INFO: Build completed, 1 test FAILED, 4 total actions
//src/summation:lib_test                                                 FAILED in 0.0s
  /home/parallels/.cache/bazel/_bazel_parallels/db6a46b6510c6ee4dba1a9500854830b/execroot/__main__/bazel-out/k8-fastbuild/testlogs/src/summation/lib_test/test.log

Executed 1 out of 1 test: 1 fails locally.

Ok, we can see our test failed. Bazel also gives us the path to the test.log file, which contains:

exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //src/summation:lib_test
-----------------------------------------------------------------------------

running 1 test
test f64::test::simple_test ... FAILED

failures:

---- f64::test::simple_test stdout ----
thread 'f64::test::simple_test' panicked at 'assertion failed: `(left == right)`
  left: `3.0`,
 right: `0.0`', src/summation/f64.rs:12:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    f64::test::simple_test

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

So we can see our test failed because 3.0 does not equal 0.0. Let's fix the test by changing the assert_eq!(res, 0.0); to assert_eq!(res, 3.0); and rerun:

$ bazel test //...
INFO: Analyzed 3 targets (0 packages loaded, 0 targets configured).
INFO: Found 2 targets and 1 test target...
INFO: Elapsed time: 0.595s, Critical Path: 0.42s
INFO: 4 processes: 4 linux-sandbox.
INFO: Build completed successfully, 4 total actions
//src/summation:lib_test                                                 PASSED in 0.0s

Executed 1 out of 1 test: 1 test passes.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
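We only exercised the f64 module. If you also want coverage for the u32 module, the same pattern applied to u32.rs would look like this (a sketch; the repo as written stops at the f64 test):

```rust
pub fn summation_u32(values: &[u32]) -> u32 {
    values.iter().sum()
}

// Same test layout as f64.rs: a #[cfg(test)] module next to the function.
#[cfg(test)]
mod test {
    use super::summation_u32;

    #[test]
    fn simple_test() {
        assert_eq!(summation_u32(&[1, 2, 3]), 6);
    }
}
```

No BUILD change is needed: lib_test is built from the whole src_summation crate, so tests in any of its source files get picked up.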

Looks like our tests are working! Next let's make the CLI call our library.

Link rust binary to library

Now that we have our library, let's make our rust binary use it.

First we'll update the $HOME/repo/src/summation/BUILD and add ":src_summation" to the rust_binary deps, which tells Bazel to pull that crate into the sandbox our target is built in. The full BUILD file after this will look like:

load("@rules_rust//rust:defs.bzl", "rust_binary", "rust_library", "rust_test")

rust_binary(
    #We'll name the target/binary "executable"
    name = "executable",
    #The list of src files it needs (just main.rs)
    srcs = ["main.rs"],
    #The libraries/crates it depends on, now including our library
    deps = [
        ":src_summation",
    ],
    #The crate_root file, this would default to main.rs but we put it in for clarity
    crate_root = "main.rs",
)

rust_library(
    name = "src_summation",
    srcs = [
        "lib.rs",
        "f64.rs",
        "u32.rs",
    ],
    deps = [],
)

rust_test(
    name = "lib_test",
    crate = ":src_summation",
    deps = [],
)

Next let's update $HOME/repo/src/summation/main.rs to use our crate. We'll have it parse command line arguments as f64 and then sum all of them.

use src_summation::f64::summation_f64;
use std::env;

fn main() {
    let args: Vec<f64> = env::args().skip(1).map(|a| a.parse().unwrap()).collect();
    println!("sum = {}", summation_f64(&args))
}
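The .unwrap() above will panic on any non-numeric argument. If you'd rather report the bad argument, one variant (a sketch, not what the repo uses) is:

```rust
use std::env;

// Hypothetical variant of main.rs: validate arguments up front and
// report the first bad one instead of panicking.
pub fn parse_args(args: impl Iterator<Item = String>) -> Result<Vec<f64>, String> {
    args.map(|a| a.parse::<f64>().map_err(|_| format!("not a number: {a}")))
        .collect()
}

fn main() {
    match parse_args(env::args().skip(1)) {
        Ok(values) => println!("sum = {}", values.iter().sum::<f64>()),
        Err(msg) => eprintln!("error: {msg}"),
    }
}
```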

Now let's build and run it:

$ bazel run //src/summation:executable
INFO: Analyzed target //src/summation:executable (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //src/summation:executable up-to-date:
  bazel-bin/src/summation/executable
INFO: Elapsed time: 0.405s, Critical Path: 0.28s
INFO: 2 processes: 1 internal, 1 linux-sandbox.
INFO: Build completed successfully, 2 total actions
INFO: Running command line: bazel-bin/src/summation/executable
sum = 0

Let's try running it with some arguments. We'll use -- to separate arguments to bazel run from arguments to our binary (we'll also omit some bazel log statements):

$ bazel run //src/summation:executable -- 1 2 3.0
sum = 6

Finally, let's run an optimized binary by adding -c opt, getting something equivalent to running with --release in cargo:

$ bazel run -c opt //src/summation:executable -- 1 2 3.0
INFO: Build option --compilation_mode has changed, discarding analysis cache.
INFO: Analyzed target //src/summation:executable (0 packages loaded, 517 targets configured).
INFO: Found 1 target...
Target //src/summation:executable up-to-date:
  bazel-bin/src/summation/executable
INFO: Elapsed time: 1.278s, Critical Path: 0.55s
INFO: 48 processes: 46 internal, 2 linux-sandbox.
INFO: Build completed successfully, 48 total actions
INFO: Running command line: bazel-bin/src/summation/executable 1 2 3.0
sum = 6

'Release mode' builds

In Bazel, instead of passing --release you set the compilation mode via --compilation_mode (shorthand -c), a command-line option that affects both the Rust and C++ compilation in our example repo. When you run bazel build or bazel run, pass -c opt to turn compiler optimizations on.

#Compiles everything with optimizations enabled
bazel build -c opt //...
#Runs our example CLI from a build with optimizations enabled
bazel run -c opt //src/summation:executable

Pulling in external crates like Clap

We'll use cargo-raze. (There's a newer alternative for pulling crates.io crates into Bazel, crate_universe, that we don't cover here.) With cargo-raze you create a Cargo.toml file specifying the crates.io crates you depend on, and run the raze extension to generate Bazel rules for compiling each of them. The dependencies can be vendored, but we'll use the non-vendored mode in the examples below.

If you're starting from our blank VM, you'll need to install Rust and Cargo. Following The Cargo Book instructions:

curl https://sh.rustup.rs -sSf | sh

Add cargo to the path:

source "$HOME/.cargo/env"

Next let's install cargo-raze:

cargo install cargo-raze

If you're using the VM we set up, this fails with "The pkg-config command could not be found" and an OpenSSL-related error. Let's install the missing packages and try again:

sudo apt install pkg-config libssl-dev
cargo install cargo-raze

Now that we have Cargo and cargo-raze, let's put our third-party Rust dependencies under the //third_party/rust path in our repo. First let's make the directories:

mkdir $HOME/repo/third_party
mkdir $HOME/repo/third_party/rust

Now let's create $HOME/repo/third_party/rust/Cargo.toml following the cargo-raze instructions:

[package]
name = "compile_with_bazel"
version = "0.0.0"

# Mandatory (or Cargo tooling is unhappy)
[lib]
path = "fake_lib.rs"

[dependencies]
log = "0.4.17"

[package.metadata.raze]
# The path at which to write output files.
#
# `cargo raze` will generate Bazel-compatible BUILD files into this path.
# This can either be a relative path (e.g. "foo/bar"), relative to this
# Cargo.toml file; or relative to the Bazel workspace root (e.g. "//foo/bar").
workspace_path = "//third_party/rust"

# This causes aliases for dependencies to be rendered in the BUILD
# file located next to this `Cargo.toml` file.
package_aliases_dir = "."

# The set of targets to generate BUILD rules for.
targets = [
    "x86_64-unknown-linux-gnu",
]

# The two acceptable options are "Remote" and "Vendored" which
# is used to indicate whether the user is using a non-vendored or
# vendored set of dependencies.
genmode = "Remote"

default_gen_buildrs = true

Now run cargo raze from the $HOME/repo/third_party/rust directory:

cd $HOME/repo/third_party/rust
cargo raze

For some reason cargo raze failed on my VM; looking at the cargo-raze source, it seems a dummy directory was missing. I fixed this by running mkdir -p "/tmp/cargo-raze/doesnt/exist/" and then running cargo raze again.

This should create a few different files in that directory, and a remote directory. The $HOME/repo/third_party/rust/BUILD.bazel file creates a new :log target which allows you to depend on the log crate. We can add //third_party/rust:log to the deps attribute of our rust_library and rust_binary rules to pull in the log crate.

We also need to update $HOME/repo/WORKSPACE to pull down the remote crates. Add this to the bottom of WORKSPACE:

### Cargo raze deps
###
load("//third_party/rust:crates.bzl", "raze_fetch_remote_crates")

# Note that this method's name depends on your gen_workspace_prefix setting.
# `raze` is the default prefix.
raze_fetch_remote_crates()

Let's add log to our library by editing $HOME/repo/src/summation/BUILD and updating the rust_library deps to say:

rust_library(
    name = "src_summation",
    srcs = [
        "lib.rs",
        "f64.rs",
        "u32.rs",
    ],
    deps = ["//third_party/rust:log"],
)

Now let's use log in f64.rs by changing the top of the file to this:

use log::trace;

pub fn summation_f64(values: &[f64]) -> f64 {
    trace!("summation_f64");
    values.iter().sum()
}

Then let's rebuild and see what happens:

$ bazel build //...
INFO: Analyzed 5 targets (5 packages loaded, 43 targets configured).
INFO: Found 5 targets...
INFO: Elapsed time: 2.326s, Critical Path: 1.91s
INFO: 14 processes: 5 internal, 9 linux-sandbox.
INFO: Build completed successfully, 14 total actions

You should see some output showing it's pulling down the third party crates and then everything compiles.

Adding Clap

Now let's add clap. We'll see there's a gotcha we need to deal with for that crate due to Bazel's sandboxing.

First we'll add clap, along with its derive feature, to $HOME/repo/third_party/rust/Cargo.toml under the [dependencies] section:

[dependencies]
log = "0.4.17"
clap = { version = "4.2.2", features = ["derive"] }

Then let's rerun cargo raze from $HOME/repo/third_party/rust:

cd $HOME/repo/third_party/rust
cargo raze

You might see a warning about needing to run cargo generate-lockfile. We can delete Cargo.raze.lock and rerun cargo raze to update versions of packages and create a new lockfile.

rm Cargo.raze.lock
cargo raze

Now let's go back to $HOME/repo/src/summation/BUILD and add "//third_party/rust:clap" to our binary deps, resulting in:

rust_binary(
    #The name of our binary target
    name = "executable",
    #The list of src files it needs (just main.rs)
    srcs = ["main.rs"],
    #The libraries/crates it depends on
    deps = [
        ":src_summation",
        "//third_party/rust:clap",
    ],
    #The crate_root file; this defaults to main.rs but we set it for clarity
    crate_root = "main.rs",
)

We'll try building before actually updating main.rs to use clap, by running:

bazel build //...

And it fails with output looking like:

INFO: Analyzed 5 targets (18 packages loaded, 806 targets configured).
INFO: Found 5 targets...
ERROR: /home/parallels/.cache/bazel/_bazel_parallels/8136e33dd0c038f4f223262d62801c45/external/raze__clap_builder__4_2_2/BUILD.bazel:34:13: Compiling Rust rlib clap_builder v4.2.2 (54 files) failed: (Exit 1): process_wrapper failed: error executing command (from target @raze__clap_builder__4_2_2//:clap_builder) bazel-out/k8-opt-exec-2B5CBBC6/bin/external/rules_rust/util/process_wrapper/process_wrapper --arg-file ... (remaining 57 arguments skipped)

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
error: couldn't read external/raze__clap_builder__4_2_2/src/../README.md: No such file or directory (os error 2)
 --> external/raze__clap_builder__4_2_2/src/lib.rs:7:10
  |
7 | #![doc = include_str!("../README.md")]
  |          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |
  = note: this error originates in the macro `include_str` (in Nightly builds, run with -Z macro-backtrace for more info)

error: aborting due to previous error

INFO: Elapsed time: 5.310s, Critical Path: 4.04s
INFO: 29 processes: 8 internal, 21 linux-sandbox.
FAILED: Build did NOT complete successfully

What's going on here? It's hard to believe the clap release on crates.io doesn't build, but that's what Bazel tells us. Looking at the error, the include_str! macro can't find ../README.md. Back in the hello world chapter we mentioned that Bazel tries to ensure hermetic builds by compiling code in a sandbox. One goal of the sandbox is to ensure you can't depend on anything you haven't explicitly told Bazel about. By default, cargo-raze tells Bazel to bring over all the *.rs files, but it doesn't specify that the compile needs README.md. We can set an option to tell it about this file by adding these lines to $HOME/repo/third_party/rust/Cargo.toml:

[package.metadata.raze.crates.clap.'*']
compile_data_attr = "glob([\"**/*.md\"])"

[package.metadata.raze.crates.clap_builder.'*']
compile_data_attr = "glob([\"**/*.md\"])"

[package.metadata.raze.crates.clap_derive.'*']
compile_data_attr = "glob([\"**/*.md\"])"

Then let's run cargo raze again to regenerate the third-party crate BUILD files.

cargo raze

And try building again

bazel build //...

At this point it should have built and you should be able to run your executable:

bazel run //src/summation:executable -- 0.0 1.0 2.0

That should output sum = 3. For completeness, let's open $HOME/repo/src/summation/main.rs and use clap to parse the args. The final code will be:

use clap::{Parser, Subcommand};
use src_summation::f64::summation_f64;
use src_summation::u32::summation_u32;

#[derive(Subcommand)]
enum Cmd {
    U32 { args: Vec<String> },
    F64 { args: Vec<String> },
}

#[derive(Parser)]
struct Arguments {
    #[command(subcommand)]
    cmd: Cmd,
}

fn main() {
    let args = Arguments::parse();
    match args.cmd {
        Cmd::U32 { args } => {
            let args: Vec<u32> = args.into_iter().map(|a| a.parse().unwrap()).collect();
            println!("sum = {}", summation_u32(&args))
        }
        Cmd::F64 { args } => {
            let args: Vec<f64> = args.into_iter().map(|a| a.parse().unwrap()).collect();
            println!("sum = {}", summation_f64(&args))
        }
    }
}

Which we can run and get the help usage for by running:

bazel run //src/summation:executable -- --help

Which shows us:

WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
INFO: Analyzed target //src/summation:executable (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //src/summation:executable up-to-date:
  bazel-bin/src/summation/executable
INFO: Elapsed time: 0.104s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/src/summation/executable --help
Usage: executable <COMMAND>

Commands:
  u32
  f64
  help  Print this message or the help of the given subcommand(s)

Options:
  -h, --help  Print help

Building Rust protobuf

Generating code for protobufs in Rust is more involved than in other languages. We'll cover both Rust and the protoc-supported languages.

We'll start with the other-language support, since that pulls down the protoc compiler, which we'll also need for Rust.

Creating the proto file

Let's decide where we want to put our protobuf file. Bazel doesn't care where we put it, so I'm going to arbitrarily pick //src/proto/summation as the path for it.

Let's put the protobuf file at $HOME/repo/src/proto/summation/summation.proto with these contents:

syntax = "proto3";

package src_proto_summation;

service Summation {
  rpc ComputeSumF64(ComputeSumF64Request) returns (ComputeSumF64Response);
}

message ComputeSumF64Request {
  repeated double value = 1;
}

message ComputeSumF64Response {
  double sum = 1;
}

Our package name above is unconventional: we're using underscores instead of dots, to match the crate naming convention we adopted for our Rust libraries.
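Before wiring up codegen, it helps to know roughly what tonic/prost will emit for these messages. Here's a hand-written approximation (the real generated code carries prost derive macros and field attributes; these stand-in structs are only for illustrating the field mapping):

```rust
// Hand-written stand-ins for the prost-generated message types.
// The real output derives prost::Message and adds field attributes.

/// `repeated double value = 1` becomes a Vec<f64> field named `value`.
#[derive(Debug, Default, Clone, PartialEq)]
pub struct ComputeSumF64Request {
    pub value: Vec<f64>,
}

/// `double sum = 1` becomes a plain f64 field.
#[derive(Debug, Default, Clone, PartialEq)]
pub struct ComputeSumF64Response {
    pub sum: f64,
}

fn main() {
    let req = ComputeSumF64Request { value: vec![5.0, 2.0] };
    let resp = ComputeSumF64Response { sum: req.value.iter().sum() };
    println!("sum = {}", resp.sum); // prints "sum = 7"
}
```

Note how the repeated field becomes a Vec<f64>; this is the request.value slice our gRPC server will later hand to summation_f64.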

Adding rules_proto for protobuf generation in protoc supported languages

We're going to use the bazel rules from rules_proto. Let's pull this down by adding this section to our $HOME/repo/WORKSPACE file:

### rules_proto
### Release info from https://github.com/bazelbuild/rules_proto/releases
http_archive(
    name = "rules_proto",
    sha256 = "dc3fb206a2cb3441b485eb1e423165b231235a1ea9b031b4433cf7bc1fa460dd",
    strip_prefix = "rules_proto-5.3.0-21.7",
    urls = [
        "https://github.com/bazelbuild/rules_proto/archive/refs/tags/5.3.0-21.7.tar.gz",
    ],
)
load("@rules_proto//proto:repositories.bzl", "rules_proto_dependencies", "rules_proto_toolchains")
rules_proto_dependencies()
rules_proto_toolchains()

Next let's create $HOME/repo/src/proto/summation/BUILD:

load("@rules_proto//proto:defs.bzl", "proto_library")

proto_library(
    name = "proto",
    srcs = [
        "summation.proto",
    ],
    visibility = ["//visibility:public"],
)

Finally run bazel build //... to build everything.

Protobuf generation in Rust

We're going to use tonic for our gRPC server and use tonic_build to generate the protobuf and gRPC code.

Pulling in tonic, tonic_build, and prost

To generate and compile the protobuf and gRPC code we'll need to explicitly expose the tonic, tonic-build, and prost crates.

When we pull down tonic we'll enable the tls features, even though we won't use them (yet) in this guide. We'll also need to tell Bazel that the compile depends on *.md files for prost, and on some other files for transitive dependencies we don't explicitly list but that get pulled down anyway.

So we'll add the following under [dependencies]:

prost = "0.11.6"
tonic = { version = "0.9.1", features = ["tls", "tls-roots", "default"] }
tonic-build = "0.9.1"

And these to the bottom of the file:

[package.metadata.raze.crates.prost.'*']
compile_data_attr = "glob([\"**/*.md\"])"

[package.metadata.raze.crates.rustls-webpki.'*']
compile_data_attr = "glob([\"**/*.der\"])"

[package.metadata.raze.crates.ring.'*']
compile_data_attr = "glob([\"**/*.der\"])"

[package.metadata.raze.crates.axum.'*']
compile_data_attr = "glob([\"**/*.md\"])"

Our full $HOME/repo/third_party/rust/Cargo.toml will look like:

[package]
name = "compile_with_bazel"
version = "0.0.0"

# Mandatory (or Cargo tooling is unhappy)
[lib]
path = "fake_lib.rs"

[dependencies]
clap = { version = "4.2.2", features = ["derive"] }
log = "0.4.17"
prost = "0.11.6"
tonic = { version = "0.9.1", features = ["tls", "tls-roots", "default"] }
tonic-build = "0.9.1"

[package.metadata.raze]
# The path at which to write output files.
#
# `cargo raze` will generate Bazel-compatible BUILD files into this path.
# This can either be a relative path (e.g. "foo/bar"), relative to this
# Cargo.toml file; or relative to the Bazel workspace root (e.g. "//foo/bar").
workspace_path = "//third_party/rust"

# This causes aliases for dependencies to be rendered in the BUILD
# file located next to this `Cargo.toml` file.
package_aliases_dir = "."

# The set of targets to generate BUILD rules for.
targets = [
    "x86_64-unknown-linux-gnu",
]

# The two acceptable options are "Remote" and "Vendored" which
# is used to indicate whether the user is using a non-vendored or
# vendored set of dependencies.
genmode = "Remote"

default_gen_buildrs = true

[package.metadata.raze.crates.clap.'*']
compile_data_attr = "glob([\"**/*.md\"])"

[package.metadata.raze.crates.clap_builder.'*']
compile_data_attr = "glob([\"**/*.md\"])"

[package.metadata.raze.crates.clap_derive.'*']
compile_data_attr = "glob([\"**/*.md\"])"

[package.metadata.raze.crates.prost.'*']
compile_data_attr = "glob([\"**/*.md\"])"

[package.metadata.raze.crates.rustls-webpki.'*']
compile_data_attr = "glob([\"**/*.der\"])"

[package.metadata.raze.crates.ring.'*']
compile_data_attr = "glob([\"**/*.der\"])"

[package.metadata.raze.crates.axum.'*']
compile_data_attr = "glob([\"**/*.md\"])"

Now let's delete the lock file and run cargo raze:

cd $HOME/repo/third_party/rust/
rm Cargo.raze.lock
cargo raze

And finally run bazel build //... to make sure we haven't broken anything yet.

Using tonic to generate protobuf and gRPC code

Getting a Rust protobuf/gRPC library takes two steps. The first is running the tonic_build generator, which works like a build.rs/cargo build script. We'll use the rules_rust cargo_build_script rule to do this.

First let's create $HOME/repo/src/proto/summation/build.rs:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("./summation.proto")?;
    Ok(())
}

Next let's update $HOME/repo/src/proto/summation/BUILD to use it:

load("@rules_proto//proto:defs.bzl", "proto_library")
load("@rules_rust//cargo:cargo_build_script.bzl", "cargo_build_script")

proto_library(
    name = "proto",
    srcs = [
        "summation.proto",
    ],
    visibility = ["//visibility:public"],
)

cargo_build_script(
    name = "generate_rust_proto",
    srcs = [
        "build.rs",
    ],
    deps = [
        "//third_party/rust:tonic_build",
    ],
    build_script_env = {
        "RUSTFMT": "$(execpath @rules_rust//:rustfmt)",
        "PROTOC": "$(execpath @com_google_protobuf//:protoc)"
    },
    data = [
        "summation.proto",
        "@rules_rust//:rustfmt",
        "@com_google_protobuf//:protoc",
    ],
)

We've added a load line for cargo_build_script and then invoked that rule to run the generator. There's a lot going on here. One thing to note is that Bazel uses different attributes to convey different types of dependencies. We've seen srcs and deps already; data is a catch-all used when things don't fit in srcs or deps. How these attributes are used varies from rule to rule, so it's worth checking the docs of the rule you're using.

In this case the data attribute tells Bazel to rerun the build script if the proto file, rustfmt, or protoc change, and to expose those files in the sandbox the build script runs in.

When we run the build script, we also need to set the environment variables RUSTFMT and PROTOC so that tonic_build knows where to find those tools, which is what the build_script_env attribute does. @rules_rust points to the external rules_rust Bazel workspace we're depending on in our WORKSPACE file. The srcs attribute points to our build.rs file (which we could have named something else, like generate_rust_proto.rs, if we wanted).

Exposing in a rust library

The prior step only runs the build script. We still need a rust_library target that consumes its output.

To do this we'll add this to $HOME/repo/src/proto/summation/BUILD

load("@rules_rust//rust:defs.bzl", "rust_library")

rust_library(
    name = "src_proto_summation",
    srcs = [
        "lib.rs",
    ],
    deps = [
        ":generate_rust_proto",
        "//third_party/rust:prost",
        "//third_party/rust:tonic",
    ],
    visibility = ["//visibility:public"],
)

And we'll create $HOME/repo/src/proto/summation/lib.rs with:

tonic::include_proto!("src_proto_summation");

Finally let's run bazel build //... and make sure everything builds!

There's a decent amount of boilerplate for creating the proto library. If I were doing this a lot, I'd write my own Bazel rule that does all of it for me.

Examining the generated rust file

The steps above generate a src_proto_summation.rs file. You can find it under your $HOME/repo/bazel-out directory; the exact path varies slightly, for me it's $HOME/repo/bazel-out/k8-fastbuild/bin/src/proto/summation/generate_rust_proto.out_dir/src_proto_summation.rs

Rust gRPC Server

We'll be extending our repo to create a gRPC server binary. In the next chapter we'll use Bazel to build this into a docker container.

We'll put the server in a new directory, $HOME/repo/src/services/summation. Again, Bazel doesn't care how we arrange our repository. At this point we've ended up with this package structure:

src/proto/summation
src/summation
src/services/summation

The src/summation directory looks a little weird. We could pretty easily move it to something like src/lib/summation. We could also move things around to have:

src/summation/proto
src/summation/lib
src/summation/services

One benefit of a monorepo where all the dependencies are self-contained is that it's easier to move things around after the fact; in general, breaking or backwards-incompatible changes are easier to make in a monorepo. We won't do that here and will keep things as is.

Exposing the tokio crate

We'll use tokio to run our server. If you look in $HOME/repo/third_party/rust/remote you'll see that cargo-raze has already pulled down tokio because it's a transitive dependency of other crates. If you look at $HOME/repo/third_party/rust/BUILD.bazel you'll see cargo-raze made that BUILD file and exposes third-party crates using the alias rule. This is what exposes crates under the //third_party/rust path, and cargo-raze only exposes the dependencies we explicitly list in $HOME/repo/third_party/rust/Cargo.toml.

Let's update [dependencies] of $HOME/repo/third_party/rust/Cargo.toml to include tokio:

[dependencies]
clap = { version = "4.2.2", features = ["derive"] }
log = "0.4.17"
prost = "0.11.6"
tonic = { version = "0.9.1", features = ["tls", "tls-roots", "default"] }
tonic-build = "0.9.1"
tokio = "1.27"

Then rerun cargo raze:

cd $HOME/repo/third_party/rust
cargo raze

We didn't remove the lock file because we don't expect (or want) this step to change the versions of any of our third-party crates.

Creating the gRPC server

Now let's create $HOME/repo/src/services/summation/main.rs with:

use src_proto_summation::summation_server::SummationServer;
use std::env;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use tonic::transport::Server;

mod my_summation;
use my_summation::MySummation;

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let port = env::var("PORT")
        .map(|p| p.parse::<u16>())
        .unwrap_or(Ok(50051))?;
    let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), port);
    let summation = MySummation::new();

    Server::builder()
        .add_service(SummationServer::new(summation))
        .serve(addr)
        .await?;

    Ok(())
}
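The PORT handling above has a subtle shape worth noting: if PORT is unset we fall back to 50051, but if it is set to something unparseable the ? propagates an error instead of silently defaulting. A stdlib-only sketch of that logic (resolve_port is a hypothetical helper standing in for the env::var chain):

```rust
use std::num::ParseIntError;

// Mirrors the PORT handling in main: missing -> default,
// present but invalid -> Err, present and valid -> that port.
fn resolve_port(raw: Option<&str>) -> Result<u16, ParseIntError> {
    match raw {
        None => Ok(50051),           // PORT unset: use the default
        Some(p) => p.parse::<u16>(), // PORT set: must parse, or error out
    }
}

fn main() {
    assert_eq!(resolve_port(None), Ok(50051));
    assert_eq!(resolve_port(Some("8080")), Ok(8080));
    assert!(resolve_port(Some("not-a-port")).is_err());
}
```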

Then create $HOME/repo/src/services/summation/my_summation.rs

use src_proto_summation::summation_server::Summation;
use src_proto_summation::ComputeSumF64Request;
use src_proto_summation::ComputeSumF64Response;
use src_summation::f64::summation_f64;
use tonic::{Request, Response, Status};

pub struct MySummation {}

impl MySummation {
    pub fn new() -> Self {
        MySummation {}
    }
}

#[tonic::async_trait]
impl Summation for MySummation {
    async fn compute_sum_f64(
        &self,
        request: Request<ComputeSumF64Request>,
    ) -> Result<Response<ComputeSumF64Response>, Status> {
        let request = request.into_inner();
        let sum = summation_f64(&request.value);
        Ok(Response::new(ComputeSumF64Response { sum }))
    }
}
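A nice property of keeping the computation in a plain library function is that the handler's core can be unit-tested without tonic or a running server. A self-contained sketch of that idea (summation_f64 is inlined here as a stand-in for the library call, since the crate only exists inside the repo):

```rust
// Stand-in for src_summation::f64::summation_f64; the real handler
// calls the library function, which has this same body.
fn summation_f64(values: &[f64]) -> f64 {
    values.iter().sum()
}

fn main() {
    // Mirrors the request body {"value": [5.0, 2.0]} we'll send with grpcurl.
    let request_values = vec![5.0, 2.0];
    let sum = summation_f64(&request_values);
    assert_eq!(sum, 7.0);
    println!("sum = {}", sum); // prints "sum = 7"
}
```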

And finally make $HOME/repo/src/services/summation/BUILD:

load("@rules_rust//rust:defs.bzl", "rust_binary")

rust_binary(
    name = "server",
    srcs = [
        "main.rs",
        "my_summation.rs",
    ],
    deps = [
        "//src/proto/summation:src_proto_summation",
        "//src/summation:src_summation",
        "//third_party/rust:tokio",
        "//third_party/rust:tonic",
    ],
)

Now let's try to build with bazel build //.... Oops, it doesn't work because //src/summation:src_summation isn't visible to our new package.

Updating visibility of //src/summation:src_summation

The visibility attribute on our targets controls who can depend on a target. In //src/summation/BUILD we omitted visibility for the src_summation target, which means it defaults to only being visible to targets in that same BUILD file. So our //src/summation:executable target could depend on it, but //src/services/summation can't.

In a multi-owner repo where one team might own //src/summation and another team owns //src/services/summation this helps the first team ensure they control who can depend on them. (Usually you'll have a code review process with CODEOWNERS to ensure the //src/summation team reviews/approves any changes to visibility).

To make //src/summation:src_summation visible to //src/services/summation we'll add visibility = ["//src/services/summation:__pkg__"] to the src_summation target in $HOME/repo/src/summation/BUILD:

rust_library(
    name = "src_summation",
    srcs = [
        "lib.rs",
        "f64.rs",
        "u32.rs",
    ],
    deps = ["//third_party/rust:log"],
    visibility = ["//src/services/summation:__pkg__"],
)

Build and test

Now when we run bazel build //... everything should build.

Next, let's run our server:

bazel run -c opt //src/services/summation:server

Finally, to test it we'll use grpcurl. If you're on the debian VM we built you can use the following to get the binary:

cd $HOME/repo
curl -L https://github.com/fullstorydev/grpcurl/releases/download/v1.8.7/grpcurl_1.8.7_linux_x86_64.tar.gz -o grpcurl.tar.gz
tar -xzvf grpcurl.tar.gz grpcurl

And then run it:

cd $HOME/repo
./grpcurl -proto src/proto/summation/summation.proto -plaintext -d '{"value": [5.0, 2.0]}' localhost:50051 src_proto_summation.Summation/ComputeSumF64

This should output:

{
  "sum": 7
}

Docker Container

Getting our binary into a docker container requires getting rules_docker setup and then using the rust_image rule it provides to build the container.

Workspace setup

Let's add the following to $HOME/repo/WORKSPACE to pull rules_docker and the rust_image related config into our WORKSPACE:

### rules_docker setup
### FROM https://github.com/bazelbuild/rules_docker#setup
###
http_archive(
    name = "io_bazel_rules_docker",
    sha256 = "b1e80761a8a8243d03ebca8845e9cc1ba6c82ce7c5179ce2b295cd36f7e394bf",
    urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.25.0/rules_docker-v0.25.0.tar.gz"],
)

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)
container_repositories()

load("@io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")

container_deps()

load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_pull",
)

# rust_image
load(
    "@io_bazel_rules_docker//rust:image.bzl",
    _rust_image_repos = "repositories",
)

_rust_image_repos()

These rules won't work without an empty BUILD file in the root of the repo, so let's make one:

touch $HOME/repo/BUILD

Building container

Now let's build the Rust container image by editing $HOME/repo/src/services/summation/BUILD. We'll add a new load statement and a rust_image target, resulting in:

load("@rules_rust//rust:defs.bzl", "rust_binary")
load("@io_bazel_rules_docker//rust:image.bzl", "rust_image")

rust_binary(
    name = "server",
    srcs = [
        "main.rs",
        "my_summation.rs",
    ],
    deps = [
        "//src/proto/summation:src_proto_summation",
        "//src/summation:src_summation",
        "//third_party/rust:tokio",
        "//third_party/rust:tonic",
    ],
)

rust_image(
    name = "server_image",
    binary = ":server",
)

That's pretty simple: we just tell the rust_image rule which binary target to put in the container.

Let's try building it, this time using -c opt so we get the equivalent of a --release build:

bazel build -c opt //...

If you have docker installed, you can run the image using:

bazel run -c opt //src/services/summation:server_image

And then from another window you should be able to test with grpcurl:

$ grpcurl -proto $HOME/repo/src/proto/summation/summation.proto -plaintext -d '{"value": [5.0, 2.0]}' localhost:50051 src_proto_summation.Summation/ComputeSumF64
{
  "sum": 7
}

To stop the container, run docker ps to find its ID and then docker kill [container id].

If you want to pass arguments to docker, add them after -- in the bazel run command. For example:

bazel run -c opt //src/services/summation:server_image -- -d -p 50051:50051

Pushing containers and running on Google Cloud

rules_docker provides a container_push rule which can be used to push the container to a container/artifact registry. We'll provide an example of doing that and then using Google Cloud Run to launch our gRPC server container on Google Cloud.

You'll need gcloud and docker installed for these instructions to work.

Setting up artifact registry

Google's artifact-registry instructions say you can use the following to create the artifact registry instance we'll use below:

gcloud artifacts repositories create quickstart-docker-repo --repository-format=docker \
--location=us-central1 --description="Docker repository"

Then we need to configure authentication:

gcloud auth configure-docker us-central1-docker.pkg.dev

Pushing the container

We're going to add a container_push target to $HOME/repo/src/services/summation/BUILD:

load("@io_bazel_rules_docker//container:container.bzl", "container_push")

### ... existing lines in file

container_push(
   name = "server_push",
   image = ":server_image",
   format = "Docker",
   registry = "us-central1-docker.pkg.dev",
   repository = "%YOUR_PROJECT_NAME%/quickstart-docker-repo/server-image",
   tag = "dev",
)

You'll need to change %YOUR_PROJECT_NAME% to your Google Cloud project name for this to work.

Now you can push a new image using bazel run:

bazel run -c opt //src/services/summation:server_push

If that succeeds it should tell you where the image was pushed and the sha256.

Running on Google Cloud Run

Now we can start a Cloud Run service using our pushed image by running gcloud run deploy [SERVICE_NAME] --image [IMAGE_URL]. Let's try it with some additional arguments:

gcloud run deploy hello-bazel-service \
  --image us-central1-docker.pkg.dev/%YOUR_PROJECT_NAME%/quickstart-docker-repo/server-image:dev \
  --allow-unauthenticated \
  --region us-central1

You'll need to change %YOUR_PROJECT_NAME% to your Google Cloud project name.

This should output a service URL you can use to hit the service, something like https://hello-bazel-service-abcde-uc.a.run.app. Let's test the service using that hostname (substitute your own) with grpcurl again:

grpcurl -proto src/proto/summation/summation.proto -d '{"value": [5.0, 2.0]}' hello-bazel-service-abcde-uc.a.run.app:443 src_proto_summation.Summation/ComputeSumF64

You should see it return:

{
  "sum": 7
}


Congratulations!

Conclusion

Congratulations on making it this far! At this point hopefully you've seen how Bazel can be used to build an end-to-end system.

One catch-22 with Bazel is that most of the benefits accrue as a repo gets larger and more complicated, but it's harder to migrate an existing set of repos/crates/projects to Bazel after it has evolved organically. Hopefully this doc helps you get started while things are simple and manageable.

PS: Tearing down

If you ran all the gcloud commands in the earlier chapters, you'll have created a VM, an artifact registry, and a Cloud Run service as part of going through the guide.

To tear these all down use the following commands:

# Delete the VM
gcloud compute instances delete hellobazel

# Delete the cloud run service
gcloud run services delete hello-bazel-service --region us-central1

# Delete the artifact registry
gcloud artifacts repositories delete quickstart-docker-repo --location=us-central1