
«Let us go then, you and I,
When the evening is spread out against the sky
Like a patient etherized upon a table;
Let us go, through certain half-deserted streets,
The muttering retreats
Of restless nights in one-night cheap hotels
And sawdust restaurants with oyster-shells:
Streets that follow like a tedious argument
Of insidious intent
To lead you to an overwhelming question …
Oh, do not ask, “What is it?”
Let us go and make our visit.».

T. S. Eliot, «The Love Song of J. Alfred Prufrock».

Copy volume via SSH (security)

dd if=/dev/VolGroup01/kvm666_img | ssh root@IP-address "dd of=/dev/VolGroup01/kvm666_img"

Copy volume via NETCAT (speed)

Remote to local (receiving side; run first, listens on port 19000)

nc -l 19000 | bzip2 -d | dd bs=16M of=/dev/VolGroup01/kvm666_img

Local to remote (sending side; connects to the listener on the remote server)

dd bs=16M if=/dev/VolGroup01/kvm358_img | bzip2 -c | nc IP-remote-server 19000

Port scan nmap

#!/bin/bash
# Fast full-port sweep, then a detailed scan of only the ports found
ports=$(nmap -p- --min-rate=500 "$1" | grep '^[0-9]' | cut -d '/' -f 1 | tr '\n' ',' | sed 's/,$//')
nmap -p "$ports" -A "$1"

SSH Tunnels

Copy via tunnel

ssh -L 1234:remote2:22 -p 45678 user1@remote1

Then, use the tunnel to copy the file directly from remote2

scp -P 1234 user2@localhost:file .

ProxyJump is safer than SSH agent forwarding
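
For a one-off hop without touching ssh_config, the same idea works inline with -J / ProxyJump; the hostnames jumphost and target below are placeholders:

ssh -J user1@jumphost user2@target

# scp can reuse the jump host via an ssh option
scp -o ProxyJump=user1@jumphost user2@target:file .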

SSH through multiple jump hosts

ssh_config


Host myserver
    HostName myserver.example.com
    User virag
    IdentityFile /users/virag/keys/myserver-cert.pub
    ProxyJump jump
Host bastion
    # Used because HostName is unreliable as the IP address changes frequently
    HostKeyAlias bastion.example
    User external
Host jump
    HostName jump.example.com
    User internal
    IdentityFile /users/virag/keys/jump-cert.pub
    ProxyJump bastion
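
With this config a single command hops transparently through bastion and then jump (a sketch using the host aliases defined above; the file path is just an example):

ssh myserver

# scp and sftp resolve the same Host aliases and jump chain
scp myserver:/path/to/file .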
      

Strace: attach to a user's process

# Attach strace to the first matching process as soon as it appears (re-attaches if the process restarts)
TARGET_USER=Vasya
while true; do
  pid=$(ps aux | grep 'name process' | grep -v grep | grep "$TARGET_USER" | awk '{print $2}' | head -n 1)
  if [ -z "$pid" ]; then echo 'no pid yet'; else echo "pid is $pid"; strace -p "$pid"; fi
  sleep 1
done

How to re-register for your Red Hat Developer Subscription

sudo subscription-manager remove --all
sudo subscription-manager unregister
sudo subscription-manager clean
sudo subscription-manager register
sudo subscription-manager refresh
sudo subscription-manager attach --auto
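
To check that the system ended up registered and entitled (both are standard subscription-manager subcommands):

sudo subscription-manager status
sudo subscription-manager list --consumed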

JQ parsing


Get pretty-printed version

cat test.json | jq '.'

Get only metadata section
cat test.json | jq '.metadata'

Accessing field inside metadata
cat test.json | jq '.metadata .key'

Accessing values inside arrays
cat test.json | jq '.metadata[0]'

Using conditionals
cat test.json | jq '.[] | select(.name == "vtb-rheltest-01").nics[1].ipAddress'
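
For scripting, -r drops the JSON quoting; a sketch against the same (assumed) test.json layout used in the examples above:

cat test.json | jq -r '.[] | "\(.name)\t\(.nics[0].ipAddress)"'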

ipmitool shortcuts

ipmitool -I lanplus -H FQDN -U username -P 'hardpasswithspecialsymbols' -L user chassis status
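
A few more subcommands that pair with the same -I/-H/-U/-P flags (power control and SOL usually need a higher privilege level than -L user, so it is omitted here):

ipmitool -I lanplus -H FQDN -U username -P '...' power status
ipmitool -I lanplus -H FQDN -U username -P '...' chassis power cycle
ipmitool -I lanplus -H FQDN -U username -P '...' sel elist
ipmitool -I lanplus -H FQDN -U username -P '...' sol activate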

HARD cold reboot of Linux (careful: almost like an IPMI reset)

echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger
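
A slightly gentler sequence syncs and remounts read-only before the reboot (still bypasses a normal shutdown):

echo 1 > /proc/sys/kernel/sysrq
echo s > /proc/sysrq-trigger   # sync dirty buffers to disk
echo u > /proc/sysrq-trigger   # remount filesystems read-only
echo b > /proc/sysrq-trigger   # immediate reboot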

Stupid firewalld (open a port)

firewall-cmd --permanent --zone=public --add-port=2234/tcp
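
--permanent only changes the saved configuration; reload (or repeat the command without --permanent) so the port is also opened in the running firewall:

firewall-cmd --reload
firewall-cmd --zone=public --list-ports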

Getting netbox token via curl

curl -X POST \
-H "Content-Type: application/json" \
-H "Accept: application/json; indent=4" \
https://netbox.domain.com/api/users/tokens/provision/ \
--data '{
    "username": "user@domain.com",
    "password": "password"
}'
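
The returned key is then passed in the Authorization header for API calls; the devices endpoint below is only an example:

curl -H "Authorization: Token YOUR_TOKEN_HERE" \
-H "Accept: application/json; indent=4" \
https://netbox.domain.com/api/dcim/devices/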

journalctl

journalctl --since "2022-01-01 17:15:00"
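
A few more filters that combine well with --since (the unit name sshd is just an example):

journalctl --since "2022-01-01 17:15:00" --until "2022-01-01 18:00:00"
journalctl -u sshd -f          # follow a single unit
journalctl -b -p err           # errors from the current boot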

Get RAM usage statistics

ps aux --no-headers | awk '{print $6/1024 " MB\t\t" $11}' | sort -n
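
Roughly the same picture straight from ps, sorted by resident set size:

ps aux --sort=-rss | head -n 10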

Apt - adding new deb packages to a local repo

apt-get install dpkg-dev  
mkdir -p /usr/local/mydebs
cd /usr/local/mydebs
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# Use the local repo via apt
echo "deb file:/usr/local/mydebs ./" >> /etc/apt/sources.list
apt update

SElinux

Basic commands for surviving Docker with SELinux enabled

semodule -l|grep container
semanage fcontext -l|grep /var/lib/docker
grep avc /var/log/audit/audit.log
restorecon -Frv /var/lib/docker/overlay2/*
grep docker /etc/selinux/targeted/contexts/files/file_contexts
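
If the AVC denials point at a genuinely missing rule, a local policy module can be generated from them (audit2allow comes with policycoreutils; the module name below is arbitrary):

grep avc /var/log/audit/audit.log | audit2allow -M local-docker
semodule -i local-docker.pp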

Restore the right labels

semanage fcontext -a -t container_var_lib_t '/var/lib/docker(/.*)?'
semanage fcontext -a -t container_share_t '/var/lib/docker/.*/config/\.env'
semanage fcontext -a -t container_file_t '/var/lib/docker/vfs(/.*)?'
semanage fcontext -a -t container_share_t '/var/lib/docker/init(/.*)?'
semanage fcontext -a -t container_share_t '/var/lib/docker/overlay(/.*)?'
semanage fcontext -a -t container_share_t '/var/lib/docker/overlay2(/.*)?'
semanage fcontext -a -t container_share_t '/var/lib/docker/containers/.*/hosts'
semanage fcontext -a -t container_log_t '/var/lib/docker/containers/.*/.*\.log'
semanage fcontext -a -t container_share_t '/var/lib/docker/containers/.*/hostname'
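
semanage fcontext only records the rules; relabel the tree afterwards so they actually take effect:

restorecon -RFv /var/lib/docker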

Pip: upload a wheel (whl) to a PyPI server

pip install twine
twine upload file_name.whl --repository-url https://pip.server_name.com/

Pip download from nexus

pip download -v -i http://nexus.ru/repository/pypi/simple --trusted-host nexus.ru --only-binary all --no-deps python-novaclient==7.1.2+202006160836.gitdbd0175
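
Installing straight from the same Nexus index uses the matching pip install flags (the package here is only an example):

pip install -i http://nexus.ru/repository/pypi/simple --trusted-host nexus.ru python-novaclient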

OpenStack: compute (nova)

CLI

# list hypervisor details
openstack hypervisor list --long
             
# list VMs with availability zone
openstack server list --long -c ID -c Name -c Status -c Networks -c "Image Name" -c "Flavor Name" -c "Availability Zone"
             
# list VMs on all hypervisors
openstack server list --all --long  -c ID -c Name -c Host
             
# list VMs on specific hypervisor
openstack server list --all-projects --host ${COMPUTE_NODE}
             
# get VM count by hypervisor
openstack server list --all --long  -c Host -f value | sort | uniq -c
             
# list compute nodes
openstack compute service list --service nova-compute
             
# list compute service
openstack compute service list --host ${OS_NODE}
             
# add / enable compute service
openstack compute service set --enable com1-dev nova-compute
             
# disable compute service
for OS_SERVICE in $(openstack compute service list --host ${OS_NODE} -c Binary -f value); do
openstack compute service set --disable --disable-reason "Maintenance" ${OS_NODE} ${OS_SERVICE}
done
             
# Search for servers with status ERROR
openstack server list --all --status ERROR
             
# Search for server with status resizing
openstack server list --all --status=VERIFY_RESIZE
             
# List instances / VMs
openstack server list
openstack server list -c ID -c Name -c Status -c Networks -c Host --long
             
# Show VM diagnostics / statistics
nova diagnostics ${SERVER_ID}
openstack server show --diagnostics ${SERVER_ID}
             
# show hypervisor usage
openstack usage list

Disable compute node

openstack compute service set --disable os-com2-dev nova-compute
openstack hypervisor list 
openstack compute service list --service nova-compute
openstack aggregate show ${AGGREGATE_ID}

Debug


# Search for server processes on wrong compute node
for COMPUTE_NODE in $(openstack compute service list --service nova-compute -c Host -f value); do
    for UUID in $(ssh ${COMPUTE_NODE} pgrep qemu -a | grep -o -P '(?<=-uuid ).*(?= -smbios)'); do
        VM_HOST=$(openstack server show -c "OS-EXT-SRV-ATTR:host" -f value ${UUID})
        if [ -z "${VM_HOST}" ]; then
            echo "Server process ${UUID} on ${COMPUTE_NODE} not available in OpenStack"
        else
            if [ "${VM_HOST}" != "${COMPUTE_NODE}" ]; then
                echo "VM ${UUID} on wrong compute node ${COMPUTE_NODE}"
            fi
        fi
    done
done

Remove compute service / server

openstack server list --all-projects --host ${NODE_ID}
openstack compute service list --host ${NODE_ID}
openstack compute service delete ${NODE_ID}

Manually rebalance VMs


# show hypervisor usage
openstack hypervisor list --long

# get processes that use swap
grep VmSwap /proc/*/status | grep -v " 0 kB"

# log in to the suspect compute node
ssh compute-node-2

# VMs by CPU usage
ssh ${COMPUTE_NODE} ps -eo pid,%cpu,cmd --sort="-%cpu" --no-headers | head -5 | grep -o -P '^[0-9]?.*(?<=-uuid ).*(?= -smbios)\b' | awk '{ print $1,$2,$NF }'

# VMs by RAM usage
ssh ${COMPUTE_NODE} ps -eo pid,size,cmd --sort="-size" --no-headers | head -5 | grep -o -P '^[0-9]?.*(?<=-uuid ).*(?= -smbios)\b' | awk '{ print $1,$2,$NF }'

openstack server show ${SERVER_ID}

# live migrate VM to a specific hypervisor
openstack server list --all --status ACTIVE --host comX-stage | grep large
openstack server migrate --os-compute-api-version 2.30 --live-migration --wait --host comX-stage ${SERVER_ID}

evacuate

openstack server list --all-projects --host com3-dev
openstack server set --state error 8041442a-9775-47c8-91be-e27286e731bd
nova evacuate 8041442a-9775-47c8-91be-e27286e731bd

aggregate

openstack aggregate list
openstack aggregate show 9
openstack aggregate add host 9 com10-stage

Add compute node

openstack compute service list
vi /etc/kolla/inventory
...
[external-compute]
new_compute_node_2
...
             
cd /etc/kolla/config/foo
kolla-ansible -i inventory deploy --limit comX-dev -e 'ansible_python_interpreter=/usr/bin/python3'

Remove compute node

COMPUTE_HOST=com1-dev
# ensure all VMs are migrated out from the compute node
openstack server list --all-projects --host ${COMPUTE_HOST}
             
# remove compute service
COMPUTE_SERVICE_ID=$(openstack compute service list --service nova-compute --host ${COMPUTE_HOST} -c ID -f value)
echo ${COMPUTE_SERVICE_ID}
openstack compute service delete ${COMPUTE_SERVICE_ID}
             
# remove network service
NETWORK_AGENT_ID=$(openstack network agent list --host ${COMPUTE_HOST} -c ID -f value)
echo ${NETWORK_AGENT_ID}
openstack network agent delete ${NETWORK_AGENT_ID}
             
# OPTIONAL: check no remaining resource_providers_allocations
http://www.cloud/openstack/resource-provider
             
# OPTIONAL: delete resource provider
openstack catalog list | grep placement
PLACEMENT_ENDPOINT=http://nova-placement.service.dev.i.example.com:8780
             
TOKEN=$(openstack token issue -f value -c id)
curl ${PLACEMENT_ENDPOINT}/resource_providers -H "x-auth-token: ${TOKEN}" | python -m json.tool
             
# delete resource provider
UUID=bf003af0-3541-4220-a5d5-c7c2e57abf22
curl ${PLACEMENT_ENDPOINT}/resource_providers/${UUID} -H "x-auth-token: $TOKEN" -X DELETE

Securing Kolla Ansible passwords with Hashicorp Vault

Feature link: https://github.com/openstack/kolla-ansible/

Generating Kolla passwords

The Kolla Ansible CLI allows an operator to generate a full set of randomised passwords with the ‘kolla-genpwd’ command:

$ cp kolla-ansible/etc/kolla/passwords.yml /etc/kolla/
$ kolla-genpwd
$ cat /etc/kolla/passwords.yml
aodh_database_password: acK1KZ1tulbzw3RjKrQC5zyxDrXMxKbHxYJR1ebX
aodh_keystone_password: 3NQDmG7PQPLV5NGg4onieMwAEoSGSDFb7fEJ5N5T
barbican_crypto_key: PugFHSE-U2cwLCqKojrltSuoGNWrzXD9gGk_XvP1Nbc=
barbican_database_password: lidQNGCxMnuXLNpggmtijYrRTAuXIBbdJoPCjtJx
barbican_keystone_password: eSacePFcfBxMs5fPysg44DEqzjwrPeMO8PbFaPKM
barbican_p11_password: ikO6saciMsYFGN5I17vmwPeOZvKLb0294fnCSeKH

Setting up Hashicorp Vault

The configuration and lockdown of your Vault policies and approles will largely depend on how your Hashicorp Vault server is deployed, but for the purpose of demonstration, here is an example approle called “kolla” which has write access to a key-value (KV) secrets engine called “production”.

1) The administrator sets up the approle and policy:

$ vault auth enable approle
$ cat << EOF | vault policy write policy-kolla-ansible -
path "production/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
EOF
$ vault write auth/approle/role/kolla \
    secret_id_ttl=10m \
    token_ttl=20m \
    token_max_ttl=30m \
    token_policies=policy-kolla-ansible

2) Operator generates a role-id and secret-id to authenticate to Vault:

$ vault read auth/approle/role/kolla/role-id
role_id     8f7ca1ff-8e5c-a924-3314-521dcbab304d
$ vault write -f auth/approle/role/kolla/secret-id
secret_id               fb932ccd-381e-188b-c1dd-a7ace9dd1be4
secret_id_accessor      079f7937-4697-40be-afaf-18bf63be230a
secret_id_ttl           10m

Writing the passwords

Now that we have a set of passwords we need to write them into our Vault KV using the ‘kolla-writepwd’ command:

$ kolla-writepwd \
    --passwords /etc/kolla/passwords.yml \
    --vault-addr 'https://vault.example.com' \
    --vault-role-id 8f7ca1ff-8e5c-a924-3314-521dcbab304d \
    --vault-secret-id fb932ccd-381e-188b-c1dd-a7ace9dd1be4 \
    --vault-mount-point production

In Vault this would look like:

$ vault kv list production/kolla_passwords
Keys
----
aodh_database_password
aodh_keystone_password
barbican_crypto_key
barbican_database_password
barbican_keystone_password
barbican_p11_password
...
$ vault kv get production/kolla_passwords/aodh_database_password
====== Metadata ======
Key              Value
---              -----
created_time     2021-06-27T18:30:08.405201929Z
deletion_time    n/a
destroyed        false
version          1

====== Data ======
Key         Value
---         -----
password    acK1KZ1tulbzw3RjKrQC5zyxDrXMxKbHxYJR1ebX

Reading the passwords

Finally, when we want to update our Kolla Ansible deployment we can read the passwords back from Vault and generate a passwords.yml file using the ‘kolla-readpwd’ command:

$ cp kolla-ansible/etc/kolla/passwords.yml /etc/kolla/passwords.yml
$ kolla-readpwd \
    --passwords /etc/kolla/passwords.yml \
    --vault-addr 'https://vault.example.com' \
    --vault-role-id 8f7ca1ff-8e5c-a924-3314-521dcbab304d \
    --vault-secret-id fb932ccd-381e-188b-c1dd-a7ace9dd1be4 \
    --vault-mount-point production

Functional testing

The init-runonce.sh script from the Kolla Ansible repository creates a private network, a public network, a CirrOS image, and flavors.

Ceph


Legacy: start an OSD via the init service:

start ceph-osd id=13
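
On systemd-based hosts the equivalent is the templated unit:

systemctl start ceph-osd@13
systemctl status ceph-osd@13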

Example fstab entry for an OSD data partition:

/dev/sdk3 /var/lib/ceph/osd/ceph-13 xfs rw,relatime,attr2,inode64,allocsize=4096k,logbsize=256k,noquota 0 0

Ceph lockfiles (crashed instance filesystems after evacuation)

Solution: add the capability to issue "osd blacklist" commands to the OpenStack clients
ceph auth caps client. mon 'allow r, allow command "osd blacklist"' osd ''
# list existing blacklist entries
ceph osd blacklist ls
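
Stale entries can be removed once the old clients are gone; the address/nonce is whatever 'ceph osd blacklist ls' printed (the value below is made up):

ceph osd blacklist rm 10.0.0.1:0/1234567890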

ZFS


Create zpool (many disks)

#!/bin/bash
# Build a "tank" zpool out of all NVMe disks except the two OS disks,
# grouping every 4 disks into a raidz2 vdev and skipping the last two disks.
ROOT=(nvme0n1 nvme1n1)

# WWNs of every disk except the root disks
wwns=($(lsblk -n -d -o WWN,NAME | grep -v "${ROOT[0]}" | grep -v "${ROOT[1]}" | cut -d " " -f 1))
((disks=${#wwns[@]} - 2))

COMMAND="zpool create tank"
for index in "${!wwns[@]}"; do
    if [[ $index == $disks ]]; then
        break
    fi
    if (( index % 4 == 0 )); then
        COMMAND="$COMMAND raidz2"
    fi
    COMMAND="$COMMAND nvme-${wwns[index]}"
done

$COMMAND -f

# Carve a zvol out of the pool and hand it to LVM (volume group "instances")
zfs create -V 5124gb tank/instances
pvcreate /dev/zd0
vgcreate instances /dev/zd0

systemctl restart nova-compute.service
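
Quick sanity checks once the pool and the volume group are in place (names as used in the script above):

zpool status tank
zfs list tank/instances
pvs /dev/zd0
vgs instances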