Category: Learning Notes

Restart CORD-in-a-Box gracefully

We may need to restart CORD-in-a-Box for many reasons: the server's power was cut off, the server has to be moved from one place to another, and so on. In this article, I will give a brief introduction to how to restart CiaB and get it working perfectly again.

I'll start from a CiaB that has just been built successfully.

Stop/Start Vagrant VMs

We can list all the VMs with the following command:
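    vagrant global-status

The IDs in the first column (like fdb9229 below) are what the halt/up commands take.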

We can stop a virtual machine with vagrant halt fdb9229 and start it again with vagrant up fdb9229 --provider libvirt.
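For example, using the ID from my machine (substitute your own from vagrant global-status):

    vagrant halt fdb9229                     # stop the VM
    vagrant up fdb9229 --provider libvirt    # start it again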

But because of how the compute node works, we can't start it before the head node: the compute node depends on PXE boot, so the head node has to come up first, and MAAS will come up with it automatically. Only after MAAS has started can the compute node boot.
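The restart order therefore looks like this (head1 and compute1 are the VM names in my CiaB; run these from ~/cord/build, or use the global-status IDs instead):

    vagrant up head1 --provider libvirt      # head node first; MAAS starts with it
    # wait until MAAS is fully up, then:
    vagrant up compute1 --provider libvirt   # compute node PXE-boots from MAAS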

Delete all Neutron networks

The official build system has a command to quickly clean up the OpenStack Neutron environment, make clean-openstack, but it does not delete every network, so we have to delete the rest manually.
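A sketch of the manual clean-up, run on the head node (I assume the admin credentials are in /opt/cord_profile/admin-openrc.sh; <net-id> is a placeholder taken from the list output):

    source /opt/cord_profile/admin-openrc.sh
    openstack network list
    # a network can't be deleted while ports are attached, so remove those first
    openstack port list --network <net-id> -f value -c ID | xargs -r -n1 openstack port delete
    openstack network delete <net-id>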

Last step: iptables forwarding policy

After Vagrant starts the VMs, the iptables FORWARD policy changes from ACCEPT to DROP, which can leave your instances without Internet connectivity.

So we should switch it back to ACCEPT with the following command:
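    sudo iptables -P FORWARD ACCEPT    # set the FORWARD chain's default policy back to ACCEPT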

Then, if your instance has a public network, it should now be able to ping Google's DNS (8.8.8.8).

Onboard custom service in CORD-4.1

This article describes how to onboard your own service in CORD-4.1 and how to modify the official pod-test to create an instance.

Install CORD-in-a-Box

Download the source code from GitHub.
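In the 4.1 quickstart this is done with the cord-bootstrap.sh script, which fetches the whole source tree (double-check the URL against the official guide):

    curl -o ~/cord-bootstrap.sh \
        https://raw.githubusercontent.com/opencord/cord/cord-4.1/scripts/cord-bootstrap.sh
    chmod +x ~/cord-bootstrap.sh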

Execute the script with the -v argument, which tells it to use Vagrant to create a virtual POD (CORD-in-a-Box):
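    ~/cord-bootstrap.sh -v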

Generate the configuration files and start the build:
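    cd ~/cord/build
    make PODCONFIG=rcord-virtual.yml config    # pick the pod config matching your profile
    make -j4 build |& tee ~/build.out

(rcord-virtual.yml is the pod config from the quickstart; use the one for your profile, e.g. an M-CORD variant.)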

CORD Architecture

We can divide CORD into three parts:

  • corddev: the VM we develop our services on.
  • head1: the head node, previously called prod; most services run here.
  • compute1: the compute node; it is given a random name such as overdue-boys.

When we develop services on CORD, we put them in corddev:/opt/cord/orchestration/xos_services. /opt/cord is mounted from localhost:~/cord/, so you can modify services and configs on your local host instead of ssh-ing into corddev.

There are big changes from version to version. In cord-4.0, CORD started using a milestone folder to record which steps have already been done, so deployment uses the same commands as building CORD.

Service Deployment Flow

platform-install/profile-manifests/mcord.yml

It doesn't matter which flavor of CORD you use; I use M-CORD for the demonstration here.

Before we start deploying our service, I assume you already have your service in the xos_services folder. If not, you can clone one from my GitHub repository, or just use exampleservice for practice.

Modify your cord-profile as follows, appending your service's configuration to the xos_services block:
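For example (a sketch; myservice is a placeholder for your service's name and path):

    xos_services:
      ...
      - name: myservice
        path: orchestration/xos_services/myservice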

Create service synchronizer

In the CORD architecture, when you upload a YAML config to the XOS TOSCA engine, it calls the service synchronizer to handle two things: creating the serviceinstance and syncing the instance. The sync step uses the Ansible playbook written in your synchronizer folder.

So here are the brief steps to put your synchronizer in CORD (a sketch of the manifest entry follows the list):

  • Register the service in ~/cord/.repo/manifest.xml.
  • Make sure the service's version control has an opencord/cord-4.1 branch.
  • Add the service's information to docker_images.yml.
  • Put the service in genconfig/config.yml, in the docker_image_whitelist block; this needs to be done on both corddev and the head node, unless you re-run the copy-config step.
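As a rough sketch, the manifest entry from the first step might look like this (name, path, remote, and revision are placeholders; copy the format of the existing entries in manifest.xml):

    <project name="myservice"
             path="orchestration/xos_services/myservice"
             remote="opencord"
             revision="cord-4.1"/>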

Write onboard playbook in platform-install

You can use exampleservice as an example; its make pod-test target runs several Ansible roles. Or use my service's playbook.

  • maas-test-client-install
  • test-subscriber-config
  • xos-onboard-hosts
  • test-subscriber-enable
  • test-vsg
  • exampleservice-config
  • test-exampleservice

But the key steps for creating a single exampleservice instance are the last two, so take a look at them and modify them.

We can use make pod-test as a template: copy its playbooks and modify them to fit our needs.

Let's take a look at the pod-test part of the Makefile: it ssh-es to the head node and sends a command that runs pod-test-playbook.yml with Ansible.
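The shape of the target is roughly this (an illustrative sketch, not the literal CORD Makefile; recipe lines are tab-indented):

    pod-test:
            ssh head1 "cd /opt/cord/build/platform-install && \
                ansible-playbook -i inventory/head-localhost pod-test-playbook.yml"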

So, the pod-test-playbook.yml does the following things:

  • Create test client
  • Create test subscriber
  • Enable test subscriber
  • Onboard exampleservice
  • Test exampleservice

But we only need to onboard our custom service and create a serviceinstance; there is no need to create a test client and subscriber (of course, you can create them if you want).

Add custom command in Makefile

If you don't want to modify the original pod-test files, you can add your own target.
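A minimal sketch, following the same shape as pod-test above (myservice and myservice-playbook.yml are placeholders):

    myservice:
            ssh head1 "cd /opt/cord/build/platform-install && \
                ansible-playbook -i inventory/head-localhost myservice-playbook.yml"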

Copy Ansible playbook and Ansible roles folder
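For example (the role names follow the exampleservice pattern; substitute your service's names):

    cd ~/cord/build/platform-install
    cp pod-test-playbook.yml myservice-playbook.yml
    cp -r roles/exampleservice-config roles/myservice-config
    cp -r roles/test-exampleservice roles/test-myservice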

Modify Ansible files

Modify Ansible Playbook

I'll skip pasting the code in this part; just make sure the paths in your configuration correctly correspond to your service's files.

Next steps

After you are done, run the commands below:
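    cd ~/cord/build
    make xos-teardown
    make clean-openstack
    make -j4 build
    make compute-node-refresh
    make myservice

(Each target is explained below; myservice is the custom target added earlier.)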

make xos-teardown cleans all the Docker images and stops the XOS-related containers, including the services in your CORD profile.

make clean-openstack removes the networks and instances from OpenStack, so you get a clean compute node.

make -j4 build uses the milestone folder to decide which steps CORD still needs to build, so it will start from start-xos.

make compute-node-refresh makes the XOS synchronizer fetch the compute node's information and treat it as an available node to place instances on; if this step is skipped, you may get a "No suitable nodes to pick from" error.

make myservice runs your playbook and sends the YAML to TOSCA; your synchronizer will then create the instance.

Appendix: Debug Services

If you find that XOS can't start normally (Ansible fails somewhere), your service may be causing the problem. Here is a debugging guide.


(XOS architecture diagram; source: https://guide.opencord.org/xos-gui/architecture/)

Our service becomes part of xos-core, so we can use docker ps to find our target:
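    docker ps | grep xos    # look for the core container, e.g. mcord_xos_core_1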

We need to inspect mcord_xos_core_1; this container's log will tell us where the problem is.

Find logs with docker logs

docker logs is a very useful command, especially when your container keeps restarting, which usually means the process inside it keeps dying and being restarted.
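For example:

    docker logs mcord_xos_core_1                 # dump the container's log
    docker logs -f --tail 100 mcord_xos_core_1   # follow the last 100 lines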

Find logs with docker exec

The docker exec command lets you open a shell inside a container and dig around for information and logs:
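    docker exec -it mcord_xos_core_1 bash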

Also, if the container runs more than one process, you can take a look at another process's stdout/stderr:
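One common trick is to read the other process's file descriptors under /proc (a sketch; get <pid> from ps inside the container):

    ps aux                 # find the target process's PID
    cat /proc/<pid>/fd/1   # its stdout
    cat /proc/<pid>/fd/2   # its stderr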