Friday, August 3, 2018

IP-based rules in Puppet Manifests

When it comes to infrastructure configuration management, almost everyone has to deal with IP addressing: an engineer configures it, validates it, or references it. One of the popular tools for infrastructure configuration management is Puppet. This post is about working with IP addressing objects in Puppet manifests. An IP addressing object is a subnet, a single IP address, or a collection of them. A Puppet manifest is a set of rules defining what the configuration of a particular system should be. Let’s start by defining a goal. Here, the goal is to configure servers based on the following rules:
  • The servers in subnet 10.1.1.0/24 should have variable datacenter set to NY
  • The servers in subnet 10.2.2.0/24 should have variable datacenter set to CH
  • The servers that are neither in 10.1.1.0/24 nor in 10.2.2.0/24 should have variable datacenter set to NA
The variable datacenter is necessary to further configure the system. For example, contact information for technical support staff could differ based on a server’s datacenter location: NY could be Amazon-based, while CH could be Microsoft Azure-based. Additionally, by default all of our servers:
  • use the eth0 interface for network connectivity
  • have a single IP address configured on that interface

Interface Discovery

The first step is to install the Puppet standard library (puppetlabs-stdlib).
puppet module install puppetlabs-stdlib
Next, reference it in init.pp of the module:
class example () inherits example::params {
  include stdlib
...
The second step is to determine the IP address of a particular interface. The Puppet manifest below discovers the IP address, network address, and network mask of the eth0 interface.
# params.pp
class example::params {
  $ifname = 'eth0'
 
  if $networking == undef {
    fail("Failed to find networking stack")
  }
  
  unless has_key($networking, 'interfaces') {
    fail("Failed to find 'interfaces' key in networking stack")
  }
  unless has_key($networking['interfaces'], $ifname) {
    fail("Failed to find interface '$ifname' in host networking stack")
  }
  unless has_key($networking['interfaces'][$ifname], 'bindings') {
    fail("The interface '$ifname' has no IP address bindings")
  }
The class belongs to the module example, and this particular file is params.pp. Next, $ifname defines the interface used for network communications. The first if statement checks whether the networking key is present in the Puppet Facter facts. The first unless statement makes the module fail when the networking fact does not have an interfaces sub-key. The next unless makes sure that eth0 is in the list of interfaces. Then, the code checks for the bindings key.
networking -> interfaces -> eth0 -> bindings
If the module reaches this point without a failure, the server has an eth0 interface. The bindings key contains a list of IP addresses associated with the interface, for example:
networking:
  interfaces:
    eth0:
      bindings:
      - address: 10.1.1.15
        netmask: 255.255.255.0
        network: 10.1.1.0
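To see what Facter actually reports on a given host before writing these checks, you can query the structured fact directly (Facter 3 and later support dot notation; the exact layout may differ per Facter version):
facter networking.interfaces.eth0.bindings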

Here, eth0 must have a single IP address associated with the interface. Let’s continue with the module rules:
# params.pp
  ...
  unless ($networking['interfaces'][$ifname]['bindings'].length == 1) {
    fail("The interface '$ifname' must have a single IP address associated with it")
  }
  $ifbinding = $networking['interfaces'][$ifname]['bindings'][0]
  ['address', 'network', 'netmask'].each |String $binding_attr| {
    unless has_key($ifbinding, $binding_attr) {
      fail("The interface '$ifname' does not have '$binding_attr'")
    }
    if $ifbinding[$binding_attr] == "" {
      fail("The interface '$ifname' has empty $binding_attr")
    }
  }
  $ifipaddr = $ifbinding['address']
  $ifipnetwork = $ifbinding['network']
  $ifipnetmask = $ifbinding['netmask']
  debug("The interface '$ifname' belongs to $ifipnetwork/$ifipnetmask network and has IP address $ifipaddress")

The above snippet checks whether the interface has subnet information and whether that information is not empty. Upon success, the module creates additional variables and outputs a message when debug logging is enabled. Importantly, these three variables are plain strings. The IP-based rules, however, compute subnet membership using special network objects that internally represent addresses as integers, which is what the next section covers.

IP Objects and Network Standard Library

Our next task is to create these special objects for the interface IP address and the interface subnet. For that, we will use Yelp's yelp-netstdlib library, available from Puppet Forge. The command below installs the library.
puppet module install yelp-netstdlib --version 0.0.1
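As an optional sanity check, assuming the module landed on the configured modulepath, the library's ip_in_cidr() function (used below) can be evaluated straight from the command line:
puppet apply -e 'notice(ip_in_cidr("10.1.1.15", "10.1.1.0/24"))'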
Next, add the reference to the library to init.pp of your own class, e.g.:
class example () inherits example::params {
  include stdlib
  include netstdlib  
  ...
As a result, Yelp's network standard library functions become available in params.pp. The library has a good interface for dealing with IP addresses, MAC addresses, and DNS operations; this post focuses on the IP addressing operations. Let's continue with the module rules. Here, we create an if/elsif/else conditional statement and use the ip_in_cidr() function. In this example, the subnet-to-datacenter mapping is hardcoded inside the module. However, the best practice is to keep that data in hiera or facts and write a custom function for your module; a data-driven sketch follows the example below.
# params.pp
  ...
  if ip_in_cidr($ifipaddr, "10.1.1.0/24") {
    $datacenter = "NY"
  } elsif ip_in_cidr($ifipaddr, "10.2.2.0/24") {
    $datacenter = "CH"
  } else {
    $datacenter = "NA"
  }
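If you prefer the data-driven approach mentioned above, a minimal sketch could look like the following. The hiera key example::datacenter_map and its default value are illustrative assumptions, not part of the original module:
# params.pp (data-driven variant, illustrative only)
  # Hypothetical hiera key mapping CIDRs to datacenter names
  $datacenter_map = lookup('example::datacenter_map', Hash, 'first', {
    '10.1.1.0/24' => 'NY',
    '10.2.2.0/24' => 'CH',
  })
  # Keep only the CIDRs that contain the interface IP address
  $matched = $datacenter_map.filter |$cidr, $dc| { ip_in_cidr($ifipaddr, $cidr) }
  if $matched.empty {
    $datacenter = 'NA'
  } else {
    $datacenter = $matched.values[0]
  }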


How to enable external Caddy Plugins

This post explains how to enable external caddy plugins. Here, the caddyserver/forwardproxy plugin is being enabled.
First, get Golang:
cd /usr/local &&
curl -s -L -O https://dl.google.com/go/go1.10.1.linux-amd64.tar.gz
tar -xvzf go1.10.1.linux-amd64.tar.gz
Next, add the following to ~/.bash_profile:
export GOPATH=$HOME/go
export GOBIN=$HOME/go/bin
export GOROOT=/usr/local/go
Also, add GOBIN and the Go toolchain directory ($GOROOT/bin) to the PATH environment variable, so that the bare go command used below resolves:
PATH=$GOBIN:$GOROOT/bin:$PATH
export PATH
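To make the variables take effect in the current shell and to confirm the toolchain picks them up, you can optionally run:
source ~/.bash_profile
go env GOPATH GOROOT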
After this step, you have the Golang compiler ready in the /usr/local/go/bin/ directory:
# /usr/local/go/bin/go version
go version go1.10.1 linux/amd64
Download the source code of caddy and caddyserver/forwardproxy plugin:
go get github.com/mholt/caddy/caddy
go get github.com/caddyserver/builds
go get github.com/caddyserver/forwardproxy
Add a reference to the caddyserver/forwardproxy plugin in the caddy source code. Find the comment "This is where other plugins get plugged in" in the $GOPATH/src/github.com/mholt/caddy/caddy/caddymain/run.go file and add an import for the plugin:
_ "github.com/caddyserver/forwardproxy"
Finally, build caddy binary:
cd $GOPATH/src/github.com/mholt/caddy/caddy
go run build.go
./caddy --version
The http.forwardproxy plugin is in the list of enabled plugins:
# caddy -plugins
Server types:
  http
Caddyfile loaders:
  short
  flag
  default
Other plugins:
  http.forwardproxy


Securing Prometheus with Caddy

This post explains how to secure access to a Prometheus server. By default, a Prometheus server instance uses plain, insecure HTTP on port 9090.
To secure a Prometheus server instance, do the following:
  • Block off-box access to TCP/9090
  • Allow HTTPS access to port TCP/8443
  • Install caddy web server
  • Configure caddy to run on TCP/8443
  • Configure caddy to proxy requests coming to port 8443 to the Prometheus server running on port 9090
  • Configure authentication of the requests using basic authentication

Firewall Rules

First, check whether there is an existing rule allowing access to the Prometheus server. Here, such a rule exists: it is rule 5.
# iptables -L -n --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
2    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
4    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
5    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:9090
6    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:9093
7    REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
Delete the rule:
iptables -D INPUT 5
Second, check whether there is a rule allowing communication on port 8443. Here, it does not exist. Add it:
iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 8443 -j ACCEPT
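These iptables changes affect only the running configuration. To keep them across reboots on a RHEL/CentOS host that uses the iptables-services package (adjust for your own persistence mechanism), also save them:
service iptables save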

Caddy Configuration File

Use the following caddy configuration file:
mkdir -p /etc/caddy
cat <<'EOF' > /etc/caddy/Caddyfile
*:8443 {
    root /var/lib/caddy
    index index.html
    errors stdout
    log stdout
    tls /etc/caddy/myServerBundle.crt /etc/caddy/myServer.key
    basicauth /prometheus admin CaddyPrometheus
    proxy /prometheus localhost:9090 {
      header_upstream X-Real-IP {remote}
      header_upstream X-Forwarded-For {remote}
    }
}
EOF
The following configuration defines the root directory for caddy (/var/lib/caddy) and the default file (index.html). It also instructs caddy to send logs and errors to standard output. Further, it provides the TLS configuration: a certificate bundle file followed by the server's private key file.
root /var/lib/caddy
    index index.html
    errors stdout
    log stdout
    tls /etc/caddy/myServerBundle.crt /etc/caddy/myServer.key
The following configuration line protects the /prometheus path with basic authentication, creating the user admin with the password CaddyPrometheus.
basicauth /prometheus admin CaddyPrometheus
Finally, the snippet below instructs caddy to proxy all requests whose path begins with /prometheus to the Prometheus server running on the same host on port 9090. When doing so, two headers, X-Real-IP and X-Forwarded-For, are added to the upstream requests.
proxy /prometheus localhost:9090 {
      header_upstream X-Real-IP {remote}
      header_upstream X-Forwarded-For {remote}
    }
Start caddy:
/bin/caddy -conf /etc/caddy/Caddyfile
Start the Prometheus server instance with this configuration option:
--web.external-url https://<HOSTNAME>:8443/prometheus
The Prometheus instance will then be accessible via the https://<HOSTNAME>:8443/prometheus URL.
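A quick way to verify the whole chain is to query a Prometheus API endpoint through the proxy with basic authentication (replace <HOSTNAME>; add -k if your client does not trust the certificate):
curl -u admin:CaddyPrometheus 'https://<HOSTNAME>:8443/prometheus/api/v1/query?query=up'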


How to compile Open Virtual Network (OVN) and Open vSwitch (OVS) from source



This post explains how to compile OVN, including OVS, from the source code on GitHub.
First, you need to have access to a host running RHEL.
Second, install the following pre-requisites:
yum -y install redhat-lsb-core rpmdevtools libcap-ng-devel \
  autoconf automake libtool openssl openssl-devel python-devel \
  libcap-ng libcap-ng-devel groff graphviz \
  checkpolicy selinux-policy-devel \
  python-sphinx python-twisted-core python-zope-interface python-six \
  procps-ng libpcap-devel numactl-devel dpdk-devel \
  kernel-devel kernel-headers
Once the above command succeeds, check your OS version:
$ lsb_release -r | awk '{print $2}'
7.3
At this point, I suggest you reboot the host. This might be necessary if your kernel package was upgraded as part of the above installation. Otherwise, the kernel-headers package may not match the running kernel, which will cause problems when building the kernel module later.
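One way to confirm that the running kernel and the installed headers line up after the reboot:
uname -r
rpm -q kernel-devel kernel-headers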
Next, clone ovs from GitHub:
mkdir -p ~/github.com/openvswitch && cd ~/github.com/openvswitch &&
git clone https://github.com/openvswitch/ovs.git
cd ovs
Run the following commands to configure and build ovs:
./boot.sh
./configure
make
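If the build completes, you can optionally confirm that the freshly built binaries work before packaging them (paths are relative to the source tree):
./vswitchd/ovs-vswitchd --version
./utilities/ovs-vsctl --version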
The code below requires some explanation. It is essential to know the OVS version when building the RPM packages; otherwise the build will fail. The version can be found in the Makefile, which is why the OVS_VERSION variable is created.
Further, tar compresses the source code and places the archive in the /tmp directory. The archive is then moved to the SOURCES/ directory. That is necessary because, by default, the RPM build downloads the code from http://openvswitch.org/releases/%{name}-%{version}.tar.gz.
MY_BUILD_DIR=~/github.com/openvswitch
OVS_BUILD_DIR=${MY_BUILD_DIR}/rpmbuild/openvswitch
OVS_KERNEL_VERSION=`uname -r`
rm -rf ${OVS_BUILD_DIR}
rm -rf /tmp/openvswitch-*
cd $MY_BUILD_DIR/ovs
OVS_VERSION=$(grep '^VERSION = ' Makefile | sed 's/VERSION = //')
tar --transform 's,^\.,openvswitch-'$OVS_VERSION',' -czf /tmp/openvswitch-${OVS_VERSION}.tar.gz --exclude .git .
mkdir -p ${OVS_BUILD_DIR}/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
mv /tmp/openvswitch-${OVS_VERSION}.tar.gz ${OVS_BUILD_DIR}/SOURCES/
Next, comment out the following line in rhel/openvswitch-fedora.spec and then add another line pointing to the file in the SOURCES/ directory:
#Source: http://openvswitch.org/releases/%{name}-%{version}.tar.gz
Source: openvswitch-%{version}.tar.gz
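Note that spectool comes from the rpmdevtools package installed earlier, while yum-builddep is provided by yum-utils; if it is not already on the host, install it first:
yum -y install yum-utils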
Finally, build the RPM by running the following commands:
cd ${MY_BUILD_DIR}/ovs
pip install ovs-sphinx-theme
spectool -A -g ./rhel/openvswitch.spec
spectool -A -g ./rhel/openvswitch-fedora.spec
spectool -A -g ./rhel/openvswitch-dkms.spec
spectool -A -g ./rhel/openvswitch-kmod-fedora.spec

yum-builddep -y ./rhel/openvswitch-fedora.spec
yum-builddep -y ./rhel/openvswitch-dkms.spec
yum-builddep -y ./rhel/openvswitch-kmod-fedora.spec
yum-builddep -y ./rhel/openvswitch.spec

rpmbuild -v --define "_topdir ${OVS_BUILD_DIR}" -ba --nocheck ./rhel/openvswitch-fedora.spec
rpmbuild -v --define "_topdir ${OVS_BUILD_DIR}" -ba --nocheck ./rhel/openvswitch-dkms.spec
rpmbuild -v --define "_topdir ${OVS_BUILD_DIR}" --define "kversion ${OVS_KERNEL_VERSION}" -ba --nocheck ./rhel/openvswitch-kmod-fedora.spec
The above efforts will result in the following RPM packages:
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-devel-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-ovn-central-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-ovn-host-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-ovn-vtep-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-ovn-common-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-ovn-docker-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-debuginfo-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-dkms-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/x86_64/openvswitch-kmod-2.10.90-1.el7.x86_64.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/noarch/openvswitch-selinux-policy-2.10.90-1.el7.noarch.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/noarch/python-openvswitch-2.10.90-1.el7.noarch.rpm
github.com/openvswitch/rpmbuild/openvswitch/RPMS/noarch/openvswitch-test-2.10.90-1.el7.noarch.rpm
github.com/openvswitch/rpmbuild/openvswitch/SRPMS/openvswitch-2.10.90-1.el7.src.rpm
github.com/openvswitch/rpmbuild/openvswitch/SRPMS/openvswitch-dkms-2.10.90-1.el7.src.rpm
github.com/openvswitch/rpmbuild/openvswitch/SRPMS/openvswitch-kmod-2.10.90-1.el7.src.rpm
