Deployment#

When you are ready to build the code you’ve written into an adapter and test it against actual hardware, you deploy the adapter to your IPC. The IPC is a computer on your network that can access your equipment and runs a Kubernetes cluster executing the same code you run in your Codespace.

To do this, you complete the build process and deploy the resulting image. Once it is deployed, you can monitor it.

Changing an Adapter#

After changing your adapter code, such as modifying an action, you will need to follow our build process. At Artificial, this works by creating a pull request in GitHub; our CI runners then build the code and push the resulting image to a secure registry, which then delivers it to the IPC.

After you’ve made a change in your code:

  1. Click the Source Control icon on the left-hand menu.

  2. Select the 3 dots in the top right corner of the Source Control panel. Select Branch > Create Branch From, choose main as the branch to create from, and enter a name for the new branch.

  3. Click the + button to stage your change.

  4. Add an informative commit message describing your change.

  5. Select the drop-down next to commit and choose Commit and create pull request.

  6. Add another message, and select Create.

  7. Select Publish Branch.

  8. You will see a Pull Request in the window, which you can open in a browser by clicking the Pull Request number.

  9. The build will then kick off.
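For reference, the VS Code steps above correspond roughly to the Git command sequence below. The branch and file names are illustrative, and the sketch works in a throwaway repository so it stands alone; in practice you would run these commands inside a clone of your adapter repository.

```shell
# Illustrative command-line equivalent of the Source Control UI steps.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "init"

git checkout -q -b fix/action-timeout            # step 2: create a branch
echo "# fix" > actions.py && git add actions.py  # step 3: stage the change
git commit -q -m "fix: correct action timeout handling"  # step 4: commit message
git log --oneline -1                             # the new commit on the branch
# Steps 5-8 (publishing the branch and opening the PR) need a remote, e.g.:
#   git push -u origin fix/action-timeout
```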

Once the build completes, you will see the adapter image that was created for testing.

Versioning#

We provide tooling around semantic versioning for PRs against main. When you create a PR, the patch version (the last of the three numbers) is automatically incremented. For production labs, we strongly recommend using only images built from main with a semantic version; PR images are intended for development testing and validation only.

If you want to bump more than the patch version, prefix your commit message with “feat:” to bump the minor version, or with “BREAKING CHANGE” to bump the major version.
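The bump rule can be summarized in a short sketch. This is not Artificial’s actual tooling, just an illustration of the convention described above:

```python
# Sketch of the version-bump rule: patch by default, minor for "feat:",
# major for a "BREAKING CHANGE" prefix. (Illustrative, not the real tooling.)

def next_version(version: str, commit_message: str) -> str:
    """Return the semantic version bumped according to the commit message."""
    major, minor, patch = (int(part) for part in version.split("."))
    if commit_message.startswith("BREAKING CHANGE"):
        return f"{major + 1}.0.0"
    if commit_message.startswith("feat:"):
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(next_version("1.4.2", "fix: retry on timeout"))     # -> 1.4.3
print(next_version("1.4.2", "feat: add pause action"))    # -> 1.5.0
print(next_version("1.4.2", "BREAKING CHANGE: new API"))  # -> 2.0.0
```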

What Happens During the Build Process#

Builds are done using Docker, which produces images: reusable artifacts that are then deployed. The build is controlled by the Dockerfile, which for the most part you will not need to touch.

All of the Python packages your adapter needs are listed in the pyproject.toml. We use this file to pin dependencies to specific versions so packages cannot be updated underneath you and break your code.
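For example, a pyproject.toml might pin dependencies like this (the project name, package names, and versions here are illustrative):

```toml
[project]
name = "my-adapter"
version = "0.1.0"
dependencies = [
    "requests==2.31.0",   # exact pins prevent surprise upgrades
    "pyyaml==6.0.1",
]
```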

We suggest a few tools to help you write adapter code that is consistent and that anyone can jump into:

pyright

A tool that performs Python type checking and rule validation; it is the same tool VS Code uses to validate Python code. Helpful for making sure you’re using the right variable types.

flake8

A linter that checks that the code conforms to the suggested style guide.

isort

Manages the order of import statements.

pytest

Runs your tests at build time.

These tools help prevent merging PRs that may break things or reduce code quality.
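As a small illustration of the value of type checking, here is a hypothetical typed helper and the kind of call pyright would reject before the build runs:

```python
# Illustrative only: a typed function and a call pyright would flag.

def set_temperature(celsius: float) -> str:
    """Format a set-point message; pyright checks the argument type."""
    return f"set point: {celsius:.1f} C"

print(set_temperature(37.0))    # passes type checking
# set_temperature("37C")        # pyright: "str" is not assignable to "float"
```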

If, for some reason, you don’t want to use these tools, you can comment out the corresponding lines and commit that to the PR; the build will then skip that step. We don’t recommend this, but it can occasionally be useful when you need to bypass one check in order to test something else.

Deploying an Adapter in Codespaces#

This is done in the Extensions panel of VS Code.

  1. Double-check that the context.yaml is pointed at the instance you want to deploy to.

  2. Open the command palette and type >Artificial: Update Adapter Image. You will see a list of the adapters connected to the lab.

  3. Select the adapter.

  4. Select an adapter image to update to; this is indicated in your build comments.

You will see a message that it was updated.

Monitoring Your Adapter#

Your instance has a dedicated Elasticsearch instance for monitoring your adapter remotely and retaining your logs historically. You can access it at https://device-mgmt-prod.kibana.notartificial.xyz/ with credentials provided by your Customer Success Manager.

Deploying Labs, Assistants, and Workflows#

To publish a workflow, follow the instructions provided in the Publishing Workflows section in Deploying Workflows.

In addition to deploying adapters, you can easily upload Labs and Assistants across your different instances with the Artificial Workflow Authoring Tool.

After you make a change to your Lab, Assistant, or Workflow in one instance (e.g., dev) and want to upload it to another instance (e.g., test):

  1. Copy the URL of the Artificial instance that you want to export your data from (e.g., dev). Make sure to include https://

  2. Open the Command Palette by clicking in the search bar or using CMD-SHIFT-P/CTRL-SHIFT-P, then type >Artificial: Sign in and press enter to run the Sign in command.

  3. Click Open when prompted and follow the window instructions.

  4. If successful, you will see two dialogs that say Sign in successful! and Add File to Context completed successfully.

  5. Go to the Artificial extension and find the bottom section labeled Lab & Assistant Export/Import.

  6. Go to the specific Lab or Assistant and click Export. If you do not see your lab in the list, select Export ALL Lab and Assistant Data.

  7. Copy the URL of the Artificial instance that you want to import your data to (e.g., test). Make sure to include https://

  8. Sign in to your second instance: open the Command Palette by clicking in the search bar or using CMD-SHIFT-P/CTRL-SHIFT-P, type >Artificial: Sign in, and press enter to run the Sign in command. Follow the prompts again to complete the sign-in process in the new window.

  9. Once successfully connected, return to the Artificial extension.

  10. Under Lab & Assistant Export/Import find the Assistant or Lab you want to upload, and click the Publish button next to it.

  11. A message will appear about overwriting cloud data. Click OK to continue.

You should now see the Lab or Assistant in the destination instance!