This section will describe hardware and software prerequisites, installing Confidential Containers with an operator, verifying the installation, and running a pod with Confidential Containers.
Getting Started
- 1: Prerequisites
- 1.1: Hardware Requirements
- 1.1.1: CoCo without Hardware
- 1.1.2: Secure Execution Host Setup
- 1.1.3: SEV-SNP Host Setup
- 1.1.4: SGX Host Setup
- 1.1.5: TDX Host Setup
- 1.2: Cloud Hardware
- 1.3: Cluster Setup
- 2: Installation
- 3: Simple Workload
1 - Prerequisites
1.1 - Hardware Requirements
Confidential Computing is a hardware technology. Confidential Containers supports multiple hardware platforms and can leverage cloud hardware. If you do not have bare metal hardware and will deploy Confidential Containers with a cloud integration, continue to the cloud section.
You can also run Confidential Containers without hardware support for testing or development.
The Confidential Containers operator, which is described in the following section, does not set up the host kernel, firmware, or system configuration. Before installing Confidential Containers on a bare metal system, make sure that your node can start confidential VMs.
This section will describe the configuration that is required on the host.
Regardless of your platform, it is recommended to have at least 8GB of RAM and 4 cores on your worker node.
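As a rough sanity check against that recommendation, you can inspect a worker node like this (a sketch; paths and commands assume a Linux node):

```shell
# Rough check against the recommended minimum (4 cores, 8 GB RAM).
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "cores=${cores} mem_gb=${mem_gb}"
if [ "${cores}" -ge 4 ] && [ "${mem_gb}" -ge 8 ]; then
    echo "meets recommended minimum"
else
    echo "below recommended minimum"
fi
```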
1.1.1 - CoCo without Hardware
For testing or development, Confidential Containers can be deployed without any hardware support.
This is referred to as a coco-dev or non-tee deployment. A coco-dev deployment functions the same way as Confidential Containers with an enclave, but a non-confidential VM is used instead of a confidential VM. This does not provide any security guarantees, but it can be used for testing.
No additional host configuration is required as long as the host supports virtualization.
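A quick way to check whether the host supports virtualization (a sketch, assuming a Linux host using KVM):

```shell
# Check that the CPU exposes virtualization extensions
# (vmx for Intel, svm for AMD) and that /dev/kvm exists.
if grep -qE 'vmx|svm' /proc/cpuinfo && [ -e /dev/kvm ]; then
    echo "virtualization available"
else
    echo "virtualization not available"
fi
```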
1.1.2 - Secure Execution Host Setup
TODO
1.1.3 - SEV-SNP Host Setup
TODO
1.1.4 - SGX Host Setup
TODO
1.1.5 - TDX Host Setup
TODO
1.2 - Cloud Hardware
Note
If you are using bare metal confidential hardware, you can skip this section.

Confidential Containers can be deployed via confidential computing cloud offerings. The main method of doing this is to use the cloud-api-adaptor, also known as "peer pods."
Some clouds also support starting confidential VMs inside of non-confidential VMs. With Confidential Containers these offerings can be used as if they were bare-metal.
1.3 - Cluster Setup
Confidential Containers requires Kubernetes. A cluster must be installed before running the operator. Many different clusters can be used, but they should meet the following requirements:
- The minimum Kubernetes version is 1.24.
- The cluster must use containerd or cri-o.
- At least one node has the label node-role.kubernetes.io/worker=.
- SELinux is not enabled.
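These requirements can be checked against a running cluster with standard kubectl commands (illustrative; assumes kubectl is configured for the cluster, and output columns may vary by version):

```shell
# Server version should report >= 1.24.
kubectl version
# The CONTAINER-RUNTIME column should show containerd or cri-o.
kubectl get nodes -o wide
# Should list at least one node carrying the worker label.
kubectl get nodes -l node-role.kubernetes.io/worker=
```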
If you use Minikube or Kind to set up your cluster, you will only be able to use runtime classes based on Cloud Hypervisor due to an issue with QEMU.
2 - Installation
Note
Make sure you have completed the prerequisites before installing Confidential Containers.

Deploy the operator
Deploy the operator by running the following command, where <RELEASE_VERSION> is substituted with the desired release tag.
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=<RELEASE_VERSION>
For example, to deploy the v0.10.0 release, run:
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.10.0
Wait until each pod has the STATUS of Running.
kubectl get pods -n confidential-containers-system --watch
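Instead of watching, you can block until the operator pods are ready with kubectl wait (the 300s timeout here is an arbitrary choice):

```shell
# Wait for every pod in the operator namespace to become Ready.
kubectl wait --for=condition=Ready pod --all \
  -n confidential-containers-system --timeout=300s
```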
Create the custom resource
Creating a custom resource installs the required CC runtime pieces into the cluster nodes and creates the runtime classes. Apply the sample custom resource that matches your platform.

For x86 platforms:
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=<RELEASE_VERSION>

For s390x (Secure Execution):
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=<RELEASE_VERSION>

For SGX (enclave-cc):
kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/hw?ref=<RELEASE_VERSION>
Note
If you are using enclave-cc with SGX, please refer to this guide for more information on setting the custom resource.

Wait until each pod has the STATUS of Running.
kubectl get pods -n confidential-containers-system --watch
Verify Installation
See if the expected runtime classes were created.
kubectl get runtimeclass
Depending on the custom resource you applied, this should return one of the following.

For the default (x86) custom resource:
NAME                 HANDLER              AGE
kata                 kata-qemu            8d
kata-clh             kata-clh             8d
kata-qemu            kata-qemu            8d
kata-qemu-coco-dev   kata-qemu-coco-dev   8d
kata-qemu-sev        kata-qemu-sev        8d
kata-qemu-snp        kata-qemu-snp        8d
kata-qemu-tdx        kata-qemu-tdx        8d

For the s390x custom resource:
NAME           HANDLER        AGE
kata           kata-qemu      60s
kata-qemu      kata-qemu      61s
kata-qemu-se   kata-qemu-se   61s

For the enclave-cc custom resource:
NAME         HANDLER      AGE
enclave-cc   enclave-cc   9m55s
Runtime Classes
CoCo supports many different runtime classes. Different deployment types install different sets of runtime classes. The operator may install some runtime classes that are not valid for your system. For example, if you run the operator on a TDX machine, you might have TDX and SEV runtime classes. Use the runtime classes that match your hardware.
Name | Type | Description
---|---|---
kata | x86 | Alias of the default runtime handler (usually the same as kata-qemu)
kata-clh | x86 | Kata Containers (non-confidential) using Cloud Hypervisor
kata-qemu | x86 | Kata Containers (non-confidential) using QEMU
kata-qemu-coco-dev | x86 | CoCo without an enclave (for testing only)
kata-qemu-sev | x86 | CoCo with QEMU for AMD SEV HW
kata-qemu-snp | x86 | CoCo with QEMU for AMD SNP HW
kata-qemu-tdx | x86 | CoCo with QEMU for Intel TDX HW
kata-qemu-se | s390x | CoCo with QEMU for Secure Execution
enclave-cc | SGX | CoCo with enclave-cc (process-based isolation without Kata)
3 - Simple Workload
Creating a sample Confidential Containers workload
Once you’ve used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the kata-qemu-coco-dev runtime class, which uses CoCo without hardware support. Initially we will try this with an unencrypted container image. In this example, we will be using the bitnami/nginx image as described in the following YAML:
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
annotations:
io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
containers:
- image: bitnami/nginx:1.22.0
name: nginx
dnsPolicy: ClusterFirst
runtimeClassName: kata-qemu-coco-dev
Setting the runtimeClassName is usually the only change needed to the pod YAML, but some platforms support additional annotations for configuring the enclave. See the guides for more details.
With Confidential Containers, the workload container images are never downloaded on the host. To verify that the container image doesn't exist on the host, log into the k8s node and ensure that the following command returns an empty result:
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it nginx.yaml).
Create the workload:
kubectl apply -f nginx.yaml
Output:
pod/nginx created
Ensure the pod was created successfully (in the Running state):
kubectl get pods
Output:
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
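To double-check that the pod is really running under the CoCo runtime class, you can query its spec with a standard kubectl jsonpath expression (illustrative; assumes the pod from this example):

```shell
# Print the runtime class recorded in the pod spec; for this
# example it should be kata-qemu-coco-dev.
kubectl get pod nginx -o jsonpath='{.spec.runtimeClassName}'
```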