Deploy the AI agent

The final step in the development process is deploying the AI agent. There are multiple deployment options to consider, such as deploying on your local machine for testing, deploying on a cluster within your own infrastructure, or deploying on a cloud provider.

Part of the development process covered in this guide

The framework supports Docker Compose and Kubernetes cluster deployments. Additionally, the framework automates several steps in the deployment process for AI agents registered in the Olas Protocol.

Tip

Local AI agent deployments are commonly used for testing AI agents during active development. These deployments allow you to test and validate your AI agent before minting it in the Olas Protocol, ensuring its readiness for production use.

What you will learn

This guide covers step 6 of the development process. You will learn the different types of AI agent deployments offered by the framework.

Before starting, ensure that your machine satisfies the framework requirements, that you have set up the framework, and that you have a local registry populated with some default components. As a result, you should have a Pipenv workspace folder containing an initialized local registry (./packages).

Local deployment - full workflow

We illustrate the full local deployment workflow using the hello_world AI agent as an example, both for Docker Compose and a simple Kubernetes cluster.

  1. Fetch the AI agent. In the workspace folder, fetch the AI agent from the corresponding registry. From the local registry:

    autonomy packages lock
    autonomy push-all
    autonomy fetch valory/hello_world:0.1.0 --service --local
    

    From the remote registry:

    autonomy fetch valory/hello_world:0.1.0:bafybeib5a5qxpx7sq6kzqjuirp6tbrujwz5zvj25ot7nsu3tp3me3ikdhy --service
    
  2. Build the agent blueprint's image. Navigate to the AI agent runtime folder that you have just created and build the Docker image of the AI agent's agent blueprint:

    cd hello_world
    autonomy build-image #(1)!
    
    1. Check out the autonomy build-image command documentation to learn more about its parameters and options.

    After the command finishes, you can check that the image has been created by executing:

    docker image ls | grep <agent_blueprint_name>
    

    You can find the agent_blueprint_name within the AI agent configuration file service.yaml.
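    For example, a quick way to cross-check this is shown below (a hedged sketch: it assumes the agent blueprint is declared under the agent: key of service.yaml and that its name appears in the image tag, as is the case for the hello_world example):

    grep 'agent:' service.yaml          # e.g. agent: valory/hello_world:0.1.0:<hash>
    docker image ls | grep hello_world  # filter the image list by the blueprint name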

  3. Prepare the keys file. Prepare a JSON file keys.json containing the wallet address and the private key of each agent instance that you wish to deploy on the local machine.

    Example of a keys.json file

    WARNING: Use this file for testing purposes only. Never use the keys or addresses provided in this example in a production environment or for personal use.

    keys.json
    [
      {
          "address": "0x15d34AAf54267DB7D7c367839AAf71A00a2C6A65",
          "private_key": "0x47e179ec197488593b187f80a00eb0da91f1b9d0b13f8733639f19c30a34926a"
      },
      {
          "address": "0x9965507D1a55bcC2695C58ba16FB37d819B0A4dc",
          "private_key": "0x8b3a350cf5c34c9194ca85829a2df0ec3153be0318b5e2d3348e872092edffba"
      },
      {
          "address": "0x976EA74026E726554dB657fA54763abd0C3a0aa9",
          "private_key": "0x92db14e403b83dfe3df233f83dfa3a0d7096f21ca9b0d6d6b8d88b2b4ec1564e"
      },
      {
          "address": "0x14dC79964da2C08b23698B3D3cc7Ca32193d9955",
          "private_key": "0x4bbbf85ce3377467afe5d46f804f221813b2bb87f24d81f60f1fcdbf7cbf4356"
      }
    ]
    

    You also need to export the environment variable ALL_PARTICIPANTS with the addresses of all the agent instances in the AI agent. In other words, the addresses of the agent instances you are deploying (in the keys.json file) must be a subset of the addresses in ALL_PARTICIPANTS, which might contain additional addresses:

    export ALL_PARTICIPANTS='[
        "0x15d34AAf54267DB7D7c367839AAf71A00a2C6A65",
        "0x9965507D1a55bcC2695C58ba16FB37d819B0A4dc",
        "0x976EA74026E726554dB657fA54763abd0C3a0aa9",
        "0x14dC79964da2C08b23698B3D3cc7Ca32193d9955"
    ]'
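    A quick way to check the subset relation (a hedged sketch, assuming jq is installed and keys.json uses the flat single-ledger format shown above):

    jq -r '.[].address' keys.json # each of these addresses must also appear in ALL_PARTICIPANTS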
    

    If you need to define keys for multiple ledgers, you can define them using the following format, where each top-level entry corresponds to one agent instance and contains its key for each ledger:

    keys.json
    [
        [
            {
                "address": "4Si...",
                "private_key": "5P1...",
                "ledger": "solana"
            },
            {
                "address": "0x1...",
                "private_key": "0x1...",
                "ledger": "ethereum"
            }
        ],
        [
            {
                "address": "H1R...",
                "private_key": "2T1...",
                "ledger": "solana"
            },
            {
                "address": "0x6...",
                "private_key": "0xc...",
                "ledger": "ethereum"
            }
        ],
        [
            {
                "address": "3bq...",
                "private_key": "5r5...",
                "ledger": "solana"
            },
            {
                "address": "0x5...",
                "private_key": "0x7...",
                "ledger": "ethereum"
            }
        ],
        [
            {
                "address": "6Gq...",
                "private_key": "25c...",
                "ledger": "solana"
            },
            {
                "address": "0x5...",
                "private_key": "0x7...",
                "ledger": "ethereum"
            }
        ]
    ]
    
  4. Build the deployment. Within the AI agent runtime folder, execute the commands below to build the AI agent deployment. For a Docker Compose deployment:

    rm -rf abci_build_* #(1)!
    autonomy deploy build keys.json -ltm #(2)!
    
    1. Delete previous deployments, if necessary.
    2. -ltm stands for "use local Tendermint node". Check out the autonomy deploy build command documentation to learn more about its parameters and options.

    This will create a deployment environment within the ./abci_build_* folder with the following structure:

    abci_build_*/
    ├── agent_keys
    │   ├── agent_0
    │   ├── agent_1
    │   |   ...
    │   └── agent_<N-1>
    ├── nodes
    │   ├── node0
    │   ├── node1
    │   |   ...
    │   └── node<N-1>
    ├── persistent_data
    │   ├── benchmarks
    │   ├── logs
    │   ├── tm_state
    │   └── venvs
    └── docker-compose.yaml
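    Optionally, you can sanity-check the generated Compose file before running it (a minimal sketch, assuming Docker Compose v2 and a single abci_build_* folder):

    docker compose -f abci_build_*/docker-compose.yaml config --services # lists the agent and Tendermint services defined in the file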
    
    For a Kubernetes deployment:

    rm -rf abci_build_* #(1)!
    autonomy deploy build keys.json -ltm --kubernetes #(2)!
    
    1. Delete previous deployments, if necessary.
    2. -ltm stands for "use local Tendermint node". Check out the autonomy deploy build command documentation to learn more about its parameters and options.

    This will create a deployment environment within the ./abci_build_* folder with the following structure:

    abci_build_*/
    ├── agent_keys
    │   ├── agent_0_private_key.yaml
    │   ├── agent_1_private_key.yaml
    │   |   ...
    │   └── agent_<N-1>_private_key.yaml
    ├── build.yaml
    └── persistent_data
        ├── benchmarks
        ├── logs
        ├── tm_state
        └── venvs
    
  5. Execute the deployment. Navigate to the deployment environment folder (./abci_build_*) and run the deployment locally. For a Docker Compose deployment:

    cd abci_build_*
    autonomy deploy run #(1)!
    
    1. Check out the autonomy deploy run command documentation to learn more about its parameters and options.

    This will spawn on the local machine:

    • \(N\) agent instance containers, each running an instance of the corresponding FSM App.
    • a network of \(N\) Tendermint nodes, one per agent instance (you can verify that they are up with the quick check below).
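    A quick way to confirm that the containers are running (a hedged check; the exact container names depend on the AI agent and on Docker Compose defaults):

    docker ps --format 'table {{.Names}}\t{{.Status}}' # list running containers and their status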

    For a Kubernetes deployment, we show how to run the AI agent using a local minikube cluster. You might also want to consider other local cluster options, such as kind.

    1. Create the minikube Kubernetes cluster.

      cd abci_build_*
      minikube start --driver=docker
      

    2. Install the NFS provisioner chart.

      helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
      helm install nfs-provisioner nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
          --set=image.tag=v3.0.0,resources.limits.cpu=200m,storageClass.name=nfs-ephemeral -n nfs-local --create-namespace
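      You can verify the installation before continuing (a hedged check, reusing the names from the command above):

      helm list -n nfs-local   # the nfs-provisioner release should be listed as deployed
      kubectl get storageclass # nfs-ephemeral should appear once the chart has created it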
      
    3. Make sure your image is pushed to Docker Hub (docker push). Otherwise, you need to provision the cluster with the agent blueprint image so that it is available to the cluster pods. This step might take a while, depending on the size of the image.

      minikube image load <repository>:<tag> # (1)!
      

      1. You can get the <repository> and <tag> by inspecting the output of docker image ls.

      In this case, you might also need to change all instances of imagePullPolicy: Always to imagePullPolicy: IfNotPresent in the deployment file build.yaml.
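      For example, a one-liner to switch the pull policy (a hedged sketch using GNU sed; review build.yaml before bulk edits, and on macOS use sed -i '' instead):

      sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g' build.yaml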

    4. Define the StorageClass. Replace the provisioner with your NFS provisioner and adjust the parameters per your requirements. The example below defines a StorageClass named nfs-ephemeral.

      cat <<EOF > storageclass.yaml
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
          name: nfs-ephemeral
      provisioner: kubernetes.io/no-provisioner
      volumeBindingMode: WaitForFirstConsumer
      reclaimPolicy: Retain
      EOF
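      You can apply it immediately, or let the recursive apply in the next step pick it up, since the file is written into the current build folder:

      kubectl apply -f storageclass.yaml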
      

    5. Apply all the deployment files to the cluster:

      kubectl apply --recursive -f .
      

    After executing these commands, the minikube cluster will provision and start \(N\) pods. Each pod contains:

    • one agent instance container, running an instance of the corresponding FSM App.
    • one Tendermint node associated with the agent instance.
  6. Examine the deployment.

    For a Docker Compose deployment, you can inspect the logs of a single agent instance or Tendermint node by executing docker logs <container_id> --follow in a separate terminal.

    You can cancel the local execution at any time by pressing Ctrl+C.

    For a Kubernetes deployment, you can access the cluster dashboard by executing minikube dashboard in a separate terminal. To examine the logs of a single agent instance or Tendermint node:

    1. Get the Kubernetes pod names.

      kubectl get pod
      

    2. Open a shell in the agent instance container of pod <pod-name> to inspect its logs.

      kubectl exec -it <pod-name> -c aea -- /bin/sh
      

    3. Open a shell in the Tendermint node container of pod <pod-name> to inspect its logs.

      kubectl exec -it <pod-name> -c node0 -- /bin/sh
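    If you only need the logs rather than a shell, kubectl logs with the container name also works (a hedged example reusing the container names shown above):

      kubectl logs <pod-name> -c aea --follow   # agent instance logs
      kubectl logs <pod-name> -c node0 --follow # Tendermint node logs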
      

    You can delete the local cluster by executing minikube delete.

Local deployment of minted AI agents

The framework provides a convenient method to deploy AI agents minted in the Olas Protocol. This has the benefit that some configuration parameters of the FSM App skill will be overridden automatically with values obtained on-chain. Namely:

skill.yaml
# (...)
models:
  params:
    args:
      setup:
        all_participants:      # Overridden with the registered values in the Olas Protocol
        safe_contract_address: # Overridden with the registered values in the Olas Protocol
        consensus_threshold:   # Overridden with the registered values in the Olas Protocol

This means, in particular, that there is no need to define the ALL_PARTICIPANTS environment variable.

  1. Find the AI agent ID. Explore the AI agents section in the Olas Protocol web app, and note the token ID of the AI agent that you want to deploy. The AI agent must be in Deployed state.

  2. Prepare the keys file. Prepare a JSON file keys.json containing the wallet address and the private key of each agent instance that you wish to deploy on the local machine.

    Example of a keys.json file

    WARNING: Use this file for testing purposes only. Never use the keys or addresses provided in this example in a production environment or for personal use.

    keys.json
    [
      {
          "address": "0x15d34AAf54267DB7D7c367839AAf71A00a2C6A65",
          "private_key": "0x47e179ec197488593b187f80a00eb0da91f1b9d0b13f8733639f19c30a34926a"
      },
      {
          "address": "0x9965507D1a55bcC2695C58ba16FB37d819B0A4dc",
          "private_key": "0x8b3a350cf5c34c9194ca85829a2df0ec3153be0318b5e2d3348e872092edffba"
      },
      {
          "address": "0x976EA74026E726554dB657fA54763abd0C3a0aa9",
          "private_key": "0x92db14e403b83dfe3df233f83dfa3a0d7096f21ca9b0d6d6b8d88b2b4ec1564e"
      },
      {
          "address": "0x14dC79964da2C08b23698B3D3cc7Ca32193d9955",
          "private_key": "0x4bbbf85ce3377467afe5d46f804f221813b2bb87f24d81f60f1fcdbf7cbf4356"
      }
    ]
    
  3. Fetch the AI agent. Fetch the AI agent from the remote registry using its token ID.

    autonomy fetch <TOKEN_ID> --use-mode # (1)!
    
    1. --use-mode indicates that the AI agent is registered on the Mode network. Check out the autonomy fetch command documentation to learn more about its parameters and options.

    Fetch the AI agent with the desired token ID on the Mode network.

  4. Build the agent blueprint's image. Build the Docker image of the AI agent's agent blueprint.

    autonomy build-image --service-dir your_ai_agent/ # (1)!
    
    1. Check out the autonomy build-image command documentation to learn more about its parameters and options.

    This command builds the Docker runtime images for the agent blueprint defined in the AI agent configuration file service.yaml.

  5. Build the deployment. Build the AI agent deployment.

    cd your_ai_agent/
    

    The deployment build command must be executed within an AI agent folder, that is, a folder containing the AI agent configuration file (service.yaml). The deployment will be created in the subfolder ./abci_build_*.

    For a Docker Compose deployment:

    autonomy deploy build path/to/keys.json -ltm # (1)!
    
    1. Check out the autonomy deploy build command documentation to learn more about its parameters and options.
    For a Kubernetes deployment:

    autonomy deploy build path/to/keys.json --kubernetes # (1)!
    
    1. Check out the autonomy deploy build command documentation to learn more about its parameters and options.
  6. Start the AI agent. Run the AI agent:

    autonomy deploy run # (1)!
    
    1. Check out the autonomy deploy run command documentation to learn more about its parameters and options.

    Run an AI agent deployment stored locally.

Cloud deployment

The sections above for local deployments provide a fundamental understanding of how to deploy AI agents in general. The Open Operator repository provides the necessary resources and guidelines for seamless cloud deployments of AI agents based on the Open Autonomy framework.