Dietrich Schroff


Review: Intent based networking for dummies

Sat, 2021-09-11 16:38

I found the book "Intent-Based Networking for Dummies" on LinkedIn, posted by Juniper:




The book contains 5 chapters on 44 pages.



Chapter one (expressing intent and seeing the basics of IBN) tries to give a motivation for intent-based networking. The story goes like this: "humans are slow, expensive, error prone, and inconsistent. [...] the systems are vulnerable to small mistakes that can have enormous costs to business."
In addition we have "inadequate automation", "data overload", and "stale documentation". (At this point I think we are generally doomed and should stop networking altogether.)
BUT with IBN "you can manage what requires automation, make your system standardized and reliable, and ensure you’re free to move and adjust heading into the future." The promise of IBN is a change from node-to-node management to an autonomic system. "The system self-operates, self-adjusts, and self-corrects within the parameters of your expressed technical objectives."
So everything should work like this: you express your intent, the intent gets translated, and then the orchestration rolls out the configuration changes onto your network.
I think one good phrase for IBN is: "You say what, it says how."


The second chapter is named "Looking at the characteristics of IBN". This chapter does not give any helpful information at all. One nice concept is mentioned here, the "Single Pane of Glass": "It’s an important concept and a valuable benefit of having a single source of truth: You can see your entire network from a single, consistent perspective." But I think this is not possible for networks. Depending on your perspective (ethernet, vlans, ips, mpls, ...) the view is completely different. Just think about hardware ports vs. virtual ports...
 

"Detailing the IBN architecture" is the titel of chapter 3. This chapter is with 9 pages the biggest chapter inside the booklet. In this chapter an example is drilled through: The intent "I want a VLAN connecting servers A, B, C, and D." is analyzed and the steps from define, translate, verify, deploy and monitor are shown.
In addition there are some subsections where the reference design, abstractions and inventory are put into relation to each other. This is illustrated with very nice figures. Really a good chapter!
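Just to make the "you say what, it says how" idea tangible: a declarative intent for the VLAN example could hypothetically look like the following YAML. This is purely illustrative and not the syntax of any real IBN product:

intent:
  name: server-vlan
  type: layer2-connectivity
  endpoints:
    - server-a
    - server-b
    - server-c
    - server-d
  # the IBN system derives VLAN ids, trunk ports and
  # device configurations from this description on its own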
 

In chapter four the book moves forward from fulfillment to assurance. "This chapter shows you why your IBN system (IBNS) requires sophisticated, deep analytics that can detect when a deployed service is drifting out of spec and either automatically make the adjustments to bring it back into compliance or alert you to the problem."
It starts with differentiating uncontrolled changes from controlled changes. This is nothing specific to IBN - I think this is useful for any kind of IT operation.
 

Chapter 5 is, as always in this "Dummies" series, a recap of the previous chapters.


All in all a nice booklet which introduces this new kind of network management very well. But whether IBN can keep its promises - let's see...
 




Microsoft Teams: How to prevent Teams echo bot from constantly disturbing phone conferences

Wed, 2021-05-19 12:01

Some people have found a new hobby: Blowing up Teams meetings.

How do they achieve this?

Very easy. If you are inside a Teams meeting just go to "add members" and type in "Teams echo":

The annoying things about this: 

  • This can be done by anyone who was invited and is not limited to your organization
  • On Linux you are not able to invite Teams Echo
  • The lobby does not work for Teams Echo - that means it will join and you have no chance to get rid of it
  • You cannot mute Teams Echo

Then click on this and you will get the following experience:

 

There is one hint I found:

https://docs.microsoft.com/en-us/answers/questions/284720/can-we-block-or-remove-39teams-echo39-bot-from-ent.html 

Microsoft itself does not really understand the issue:

https://answers.microsoft.com/en-us/msteams/forum/all/teams-echo-entering-into-meetings/3418d131-8619-4785-9ab4-0aed6acbb8c2?auth=1

But this does not work, because there is no "Teams echo" app inside https://admin.teams.microsoft.com/policies/manage-apps 

The problem is known:


If you know how to prevent this: Please leave a comment...

Microsoft Ignite: Book of News - March 2021 (Azure et al.)

Mon, 2021-04-05 02:46

If you are interested in the new features of Azure, Office 365 and other Microsoft topics, read the Book of News:

https://news.microsoft.com/ignite-march-2021-book-of-news/

 


The table of contents shows the following chapters:


In my opinion chapter 5.4 is one of the most important ones:

https://news.microsoft.com/ignite-march-2021-book-of-news/#a-541-new-security-compliance-and-identity-certifications-and-content-aim-to-close-security-skills-gap

To help address the security skills gap, Microsoft has added four new Security, Compliance and Identity certifications with supporting training and has made several updates to the Microsoft Security Technical Content Library. These certifications and content are intended to help cybersecurity professionals increase their skilling knowledge and keep up with complex cybersecurity threats.

These new certifications with supporting training are tailored to specific roles and needs, regardless of where customers are in their skilling journey:

  • The Microsoft Certified: Security, Compliance, and Identity Fundamentals certification will help individuals get familiar with the fundamentals of security, compliance and identity across cloud-based and related Microsoft services.
  • The Microsoft Certified: Information Protection Administrator Associate certification focuses on planning and implementing controls that meet organizational compliance needs.
  • The Microsoft Certified: Security Operations Analyst Associate certification helps security operational professionals design threat protection and response systems.
  • The Microsoft Certified: Identity and Access Administrator Associate certification helps individuals design, implement and operate an organization’s identity and access management systems by using Azure Active Directory (Azure AD).

In addition, the Microsoft Security Technical Content Library contains new technical content and resources.

 

metallb on microk8s: loadbalancer ip not reachable from clients /arp issue

Sat, 2021-03-13 12:20

 

In my last posting I wrote how to configure and use metallb on a microk8s kubernetes cluster. This worked fine - but on the next day I was no longer able to reach the loadbalancer ip from clients outside the kubernetes cluster.

So what happened?

Just two things in advance:

  • metallb does not create interfaces on the node
    That means the loadbalancer ip is not announced inside the network by the OS
  • metallb has to use its own arp mechanism

If a client (on the same network as the kubernetes cluster) cannot reach the loadbalancer ip, you have to check the arp table.

On all kubernetes nodes (except the master) you will find the loadbalancer:

arp 192.168.178.230
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.178.230          ether   dc:a6:32:65:c4:ee   C                     eth0

On the metallb controller you will find nothing:

The controller can be found with this command:

kubectl get all -o wide -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/speaker-hgf7l                 1/1     Running   1          21h   192.168.178.53   ubuntu   <none>           <none>
pod/controller-559b68bfd8-tgmv7   1/1     Running   1          21h   10.1.243.224     ubuntu   <none>           <none>
pod/speaker-d9d7z                 1/1     Running   1          21h   192.168.178.57   zigbee   <none>           <none>
and on this node (ubuntu, where the controller pod runs):

arp 192.168.178.230
192.168.178.230 (192.168.178.230) -- no entry

On the client you are using, you get the same result: no arp entry for this ip. 

Option 1: the quick fix

Run arp -s 192.168.178.230 dc:a6:32:65:c4:ee on your client. After that you can reach 192.168.178.230, because your client now knows which NIC (MAC) it has to address.
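The static entry has to be maintained by hand and does not survive a reboot; to remove it again (a sketch):

arp -d 192.168.178.230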

Option 2: switch the interface on the controller node to promiscuous mode

Without running the interface in promiscuous mode, metallb cannot announce the ip via arp. So run ifconfig wlan0 promisc. (https://github.com/metallb/metallb/issues/284)
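On systems without ifconfig the same can be done with iproute2 - a sketch, assuming the interface is called wlan0:

ip link set wlan0 promisc on
ip link show wlan0

(the flags in the output should now contain PROMISC)

From a client you can then verify the announcement with arping (from the iputils package), e.g.:

arping -I eth0 -c 3 192.168.178.230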







microk8s: Using the integrated loadbalancer metallb for a application/container

Fri, 2021-03-12 16:14

 

Microk8s comes with an internal loadbalancer: metallb (https://microk8s.io/docs/addons)

For project status and documentation: https://metallb.universe.tf/

My problem with this addon: it is very easy to install - but I found nearly nothing about the configuration that is needed to make it work...

The only source was https://opensource.com/article/20/7/homelab-metallb

So here is everything from the beginning: 

# microk8s.enable metallb

You have to add an ip range after you hit enter. This should be some ips which are not in use and which your DHCP server should not assign to other devices.
You can check this range afterwards via:

# kubectl describe configmaps -n metallb-system
Name:         kube-root-ca.crt
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
ca.crt:
----
-----BEGIN CERTIFICATE-----
MIIDA..........=
-----END CERTIFICATE-----

Events:  <none>


Name:         config
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.178.230-192.168.178.240

Events:  <none>
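If you want to change the range later, you can edit this configmap directly - a sketch; metallb watches its configuration, but if in doubt restart the metallb pods afterwards:

kubectl edit configmap config -n metallb-system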
After this you have to write a yaml like the following to connect your application to metallb:

apiVersion: v1
kind: Service
metadata:
  name: kuard2
  namespace: kuard2
spec:
  selector:
    app: kuard2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
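Note that this service selects pods with the label app: kuard2, so it assumes a matching deployment already exists in that namespace - for example created like this:

kubectl create deployment kuard2 --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard2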
Fairly easy, but if you do not know where to start, this is almost impossible to figure out. The next step is to deploy this yaml:

# kubectl apply -f loadbalancer.yaml -n kuard2

To get the loadbalancer ip you have to issue this command:

# kubectl describe service kuard2 -n kuard2
Name:                     kuard2
Namespace:                kuard2
Labels:                   <none>
Annotations:              <none>
Selector:                 app=kuard2
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.152.183.119
IPs:                      10.152.183.119
LoadBalancer Ingress:     192.168.178.230
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31298/TCP
Endpoints:                10.1.243.220:8080,10.1.243.221:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age    From                Message
  ----    ------        ----   ----                -------
  Normal  IPAllocated   6m31s  metallb-controller  Assigned IP "192.168.178.230"
  Normal  nodeAssigned  6m31s  metallb-speaker     announcing from node "ubuntu"
And then your service is reachable with wget http://192.168.178.230:80 or any browser that can connect to this ip.


Kubernetes: Building Kuard for Raspberry Pi (microk8s / ARM64)

Sun, 2021-02-28 12:26

 

In one of my last posts (click here) I used KUARD (kubernetes up and running demo) to check the livenessProbes of kubernetes.

In that posting I pulled the image from gcr.io/kuar-demo/kuard-arm64:3.

But what about building this image myself?

First step: get the sources:

git clone https://github.com/kubernetes-up-and-running/kuard.git

Second step: run docker build:

cd kuard/
docker build . -t kuard:localbuild

But this fails with:

Step 13/14 : COPY --from=build /go/bin/kuard /kuard
COPY failed: stat /var/lib/docker/overlay2/60ba596c03e23fdfbca2216f495504fa2533a2f2e8cadd81a764a200c271de86/merged/go/bin/kuard: no such file or directory

What is going wrong here?

Inside the Dockerfile(s) there is ARCH=amd64

Just correct that with "sed -i 's/amd/arm/g' Dockerfile*"

After that the image is built without any problem:

Sending build context to Docker daemon  3.379MB
Step 1/14 : FROM golang:1.12-alpine AS build
 ---> 9d993b748f32
Step 2/14 : RUN apk update && apk upgrade && apk add --no-cache git nodejs bash npm
 ---> Using cache
 ---> 54400a0a06c5
Step 3/14 : RUN go get -u github.com/jteeuwen/go-bindata/...
 ---> Using cache
 ---> afe4c54a86c3
Step 4/14 : WORKDIR /go/src/github.com/kubernetes-up-and-running/kuard
 ---> Using cache
 ---> a51084750556
Step 5/14 : COPY . .
 ---> 568ef8c90354
Step 6/14 : ENV VERBOSE=0
 ---> Running in 0b7100c53ab0
Removing intermediate container 0b7100c53ab0
 ---> f22683c1c167
Step 7/14 : ENV PKG=github.com/kubernetes-up-and-running/kuard
 ---> Running in 8a0f880ea2ca
Removing intermediate container 8a0f880ea2ca
 ---> 49374a5b3802
Step 8/14 : ENV ARCH=arm64
 ---> Running in c6a08b2057d0
Removing intermediate container c6a08b2057d0
 ---> dd871e379a96
Step 9/14 : ENV VERSION=test
 ---> Running in 07e7c373ece7
Removing intermediate container 07e7c373ece7
 ---> 9dabd61d9cd0
Step 10/14 : RUN build/build.sh
 ---> Running in 66471550192c
Verbose: 0

> webpack-cli@3.2.1 postinstall /go/src/github.com/kubernetes-up-and-running/kuard/client/node_modules/webpack-cli
> lightercollective


     *** Thank you for using webpack-cli! ***

Please consider donating to our open collective
     to help us maintain this package.

  https://opencollective.com/webpack/donate

                    ***

added 819 packages from 505 contributors and audited 887 packages in 86.018s
found 683 vulnerabilities (428 low, 4 moderate, 251 high)
  run `npm audit fix` to fix them, or `npm audit` for details

> client@1.0.0 build /go/src/github.com/kubernetes-up-and-running/kuard/client
> webpack --mode=production

Browserslist: caniuse-lite is outdated. Please run next command `npm update caniuse-lite browserslist`
Hash: 52ca742bfd1307531486
Version: webpack 4.28.4
Time: 39644ms
Built at: 02/05/2021 6:48:35 PM
    Asset     Size  Chunks                    Chunk Names
bundle.js  333 KiB       0  [emitted]  [big]  main
Entrypoint main [big] = bundle.js
 [26] (webpack)/buildin/global.js 472 bytes {0} [built]
[228] (webpack)/buildin/module.js 497 bytes {0} [built]
[236] (webpack)/buildin/amd-options.js 80 bytes {0} [built]
[252] ./src/index.jsx + 12 modules 57.6 KiB {0} [built]
      | ./src/index.jsx 285 bytes [built]
      | ./src/app.jsx 7.79 KiB [built]
      | ./src/env.jsx 5.42 KiB [built]
      | ./src/mem.jsx 5.81 KiB [built]
      | ./src/probe.jsx 7.64 KiB [built]
      | ./src/dns.jsx 5.1 KiB [built]
      | ./src/keygen.jsx 7.69 KiB [built]
      | ./src/request.jsx 3.01 KiB [built]
      | ./src/highlightlink.jsx 1.37 KiB [built]
      | ./src/disconnected.jsx 3.6 KiB [built]
      | ./src/memq.jsx 6.33 KiB [built]
      | ./src/fetcherror.js 122 bytes [built]
      | ./src/markdown.jsx 3.46 KiB [built]
    + 249 hidden modules
go: finding github.com/prometheus/client_golang v0.9.2
go: finding github.com/spf13/pflag v1.0.3
go: finding github.com/miekg/dns v1.1.6
go: finding github.com/pkg/errors v0.8.1
go: finding github.com/elazarl/go-bindata-assetfs v1.0.0
go: finding github.com/BurntSushi/toml v0.3.1
go: finding github.com/felixge/httpsnoop v1.0.0
go: finding github.com/julienschmidt/httprouter v1.2.0
go: finding github.com/dustin/go-humanize v1.0.0
go: finding golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: finding github.com/spf13/viper v1.3.2
go: finding github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: finding github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: finding github.com/matttproud/golang_protobuf_extensions v1.0.1
go: finding github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: finding github.com/golang/protobuf v1.2.0
go: finding github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: finding golang.org/x/sync v0.0.0-20181108010431-42b317875d0f
go: finding golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: finding golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: finding github.com/hashicorp/hcl v1.0.0
go: finding github.com/spf13/afero v1.1.2
go: finding github.com/coreos/go-semver v0.2.0
go: finding golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9
go: finding github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8
go: finding github.com/fsnotify/fsnotify v1.4.7
go: finding github.com/spf13/jwalterweatherman v1.0.0
go: finding github.com/coreos/etcd v3.3.10+incompatible
go: finding gopkg.in/yaml.v2 v2.2.2
go: finding golang.org/x/text v0.3.0
go: finding github.com/pelletier/go-toml v1.2.0
go: finding github.com/magiconair/properties v1.8.0
go: finding github.com/mitchellh/mapstructure v1.1.2
go: finding github.com/stretchr/testify v1.2.2
go: finding github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6
go: finding golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a
go: finding github.com/coreos/go-etcd v2.0.0+incompatible
go: finding github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77
go: finding github.com/spf13/cast v1.3.0
go: finding github.com/davecgh/go-spew v1.1.1
go: finding gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
go: finding github.com/pmezard/go-difflib v1.0.0
go: downloading github.com/julienschmidt/httprouter v1.2.0
go: downloading github.com/pkg/errors v0.8.1
go: downloading github.com/miekg/dns v1.1.6
go: downloading github.com/spf13/viper v1.3.2
go: downloading github.com/felixge/httpsnoop v1.0.0
go: downloading github.com/spf13/pflag v1.0.3
go: downloading github.com/prometheus/client_golang v0.9.2
go: extracting github.com/pkg/errors v0.8.1
go: extracting github.com/julienschmidt/httprouter v1.2.0
go: extracting github.com/felixge/httpsnoop v1.0.0
go: extracting github.com/spf13/viper v1.3.2
go: downloading github.com/elazarl/go-bindata-assetfs v1.0.0
go: extracting github.com/elazarl/go-bindata-assetfs v1.0.0
go: extracting github.com/spf13/pflag v1.0.3
go: downloading gopkg.in/yaml.v2 v2.2.2
go: downloading github.com/dustin/go-humanize v1.0.0
go: extracting github.com/miekg/dns v1.1.6
go: downloading github.com/fsnotify/fsnotify v1.4.7
go: downloading github.com/hashicorp/hcl v1.0.0
go: extracting github.com/dustin/go-humanize v1.0.0
go: downloading github.com/magiconair/properties v1.8.0
go: downloading github.com/spf13/afero v1.1.2
go: extracting github.com/fsnotify/fsnotify v1.4.7
go: downloading golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: downloading github.com/spf13/jwalterweatherman v1.0.0
go: downloading github.com/spf13/cast v1.3.0
go: extracting github.com/spf13/jwalterweatherman v1.0.0
go: extracting gopkg.in/yaml.v2 v2.2.2
go: extracting github.com/spf13/afero v1.1.2
go: extracting github.com/magiconair/properties v1.8.0
go: extracting github.com/prometheus/client_golang v0.9.2
go: downloading github.com/mitchellh/mapstructure v1.1.2
go: extracting github.com/spf13/cast v1.3.0
go: downloading golang.org/x/text v0.3.0
go: downloading golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: extracting github.com/mitchellh/mapstructure v1.1.2
go: extracting github.com/hashicorp/hcl v1.0.0
go: downloading github.com/pelletier/go-toml v1.2.0
go: downloading golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: downloading github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: downloading github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: extracting github.com/pelletier/go-toml v1.2.0
go: downloading github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: extracting github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a
go: extracting github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
go: downloading github.com/golang/protobuf v1.2.0
go: downloading github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: extracting github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
go: extracting github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: extracting github.com/matttproud/golang_protobuf_extensions v1.0.1
go: extracting github.com/golang/protobuf v1.2.0
go: extracting golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a
go: extracting golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
go: extracting golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
go: extracting golang.org/x/text v0.3.0
Removing intermediate container 66471550192c
 ---> 236f3050bc93
Step 11/14 : FROM alpine
 ---> 1fca6fe4a1ec
Step 12/14 : USER nobody:nobody
 ---> Using cache
 ---> cabde1f6b77c
Step 13/14 : COPY --from=build /go/bin/kuard /kuard
 ---> 39e8b0af8cef
Step 14/14 : CMD [ "/kuard" ]
 ---> Running in ca867aeb43ba
Removing intermediate container ca867aeb43ba
 ---> e1cb3fd58eb4
Successfully built e1cb3fd58eb4
Successfully tagged kuard:localbuild
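To verify the locally built image, you can run it directly with docker and point a browser at port 8080 (a quick sketch):

docker run --rm -d -p 8080:8080 kuard:localbuild

After that http://localhost:8080 should show the kuard status page.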

Kubernetes: Run a docker image as pod or deployment?

Wed, 2021-02-24 12:32

If you want to run a docker image inside kubernetes, you can choose at least two ways:

  1. pod
  2. deployment

The first is done with these commands:

kubectl create namespace kuard
kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080
kubectl expose pod kuard --type=NodePort --port=8080 -n kuard

To run the image inside a deployment, the commands look very similar:

kubectl create namespace kuard2
kubectl create deployment kuard2 --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard2
kubectl expose deployment kuard2 -n kuard2 --type=NodePort --port=8080

Both are done with three commands, but what is the difference?

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   5          3d21h

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d20h

 versus

# kubectl get all -n kuard2
NAME                        READY   STATUS    RESTARTS   AGE
pod/kuard2-f8fd6497-4f7bc   1/1     Running   0          5m38s

NAME             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard2   NodePort   10.152.183.233   <none>        8080:32627/TCP   4m32s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kuard2   1/1     1            1           5m39s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/kuard2-f8fd6497   1         1         1       5m38s

So as you can clearly see, a deployment also creates a replicaset in addition to the pod. But you do not really want to deploy in such an unconfigured way (remember: livenessProbes & readinessProbes can only be configured with kubectl apply + YAML). But you can get a template via 

kubectl get deployments kuard2 -n kuard2 -o yaml

which you can use to configure all parameters - this is easier than writing the complete YAML manually.
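A minimal round trip could look like this (a sketch - edit the file in between, e.g. to add a livenessProbe):

kubectl get deployment kuard2 -n kuard2 -o yaml > kuard2.yaml
vi kuard2.yaml
kubectl apply -f kuard2.yaml -n kuard2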

Kubernetes: LivenessProbes - first check

Sat, 2021-02-13 13:01

One key feature of kubernetes is that unhealthy pods will be restarted. How can this be tested?

First you should deploy KUARD (kubernetes up and running demo). With this docker image you can check the restart feature easily:

(To deploy kuard read this posting, but there are some small differences)

# kubectl create namespace kuard
namespace/kuard created

But you cannot use kubectl run here, because there is no command line parameter to add the livenessProbe configuration. So you have to write a yaml file:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kuard
  name: kuard
  namespace: kuard
spec:
  containers:
  - image: gcr.io/kuar-demo/kuard-arm64:3
    name: kuard
    livenessProbe:
      httpGet:
        path: /healthy
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 3

    ports:
    - containerPort: 8080
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
and then run

# kubectl apply -f kuard.yaml -n kuard

The exposed port (see this posting) stays untouched, so you can reach your kuard over http.

So go to the tab "liveness probe" and you will see:

Now click on "Fail" and the livenessProbe will get a http 500:

 And after 3 retries you will see:

and the command line will show 1 restart:

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   1          118s

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d21h
Really cool - but really annoying that this cannot be configured via the CLI, only via YAML.
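By the way: you can watch the restart happen live from a second terminal (a sketch):

kubectl get pods -n kuard -w

The RESTARTS counter increases as soon as the failureThreshold of 3 is reached.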



Microk8s: Running KUARD (Kubernetes Up And Running Demo) on a small cluster

Sat, 2021-02-06 11:40

There is a cool demo application, which you can use to check your kubernetes settings. This application is called kuard (https://github.com/kubernetes-up-and-running/kuard):

To get it running in a way that you can uninstall it easily, run the following commands:

# kubectl create namespace kuard
namespace/kuard created
You can deploy it via "kubectl run" or create a YAML with "kubectl run ... --dry-run=client --output=yaml" and deploy via "kubectl apply":

#kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080 --dry-run=client --output=yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kuard
  name: kuard
  namespace: kuard
spec:
  containers:
  - image: gcr.io/kuar-demo/kuard-arm64:3
    name: kuard
    ports:
    - containerPort: 8080
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
or 

# kubectl run kuard --image=gcr.io/kuar-demo/kuard-arm64:3 -n kuard --port 8080

To expose it in your cluster run:

# kubectl expose pod kuard --type=NodePort --port=8080 -n kuard
service/kuard exposed

And then check the port via

# kubectl get all -n kuard
NAME        READY   STATUS    RESTARTS   AGE
pod/kuard   1/1     Running   5          3d20h

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/kuard   NodePort   10.152.183.227   <none>        8080:32047/TCP   3d20h

The number after 8080: is the port you can use (http://zigbee:32047/) 

With kuard you can run DNS checks on the pods or browse the filesystem to check things... You can even set the status for the liveness and readiness probes.
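And because everything lives in its own namespace, uninstalling is a one-liner which removes the pod and the service in one go (a sketch):

kubectl delete namespace kuard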



Kubernetes: publishing services - clusterip vs. nodeport vs. loadbalancer and connecting to the services

Thu, 2021-02-04 16:22

In my posting http://dietrichschroff.blogspot.com/2020/11/kubernetes-with-microk8s-first-steps-to.html I described how to expose a NGINX on a kubernetes cluster, so that I was able to open the NGINX page with a browser which was not located on one of the kubernetes nodes.

After reading around, here are the fundamentals: why this worked and which alternatives can be used.

The command

kubectl expose deployment web --type=NodePort --port=80

can be used with the following types:

https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

So exposing via a clusterip only exposes your service internally. If you want to access it from the outside, follow this tutorial: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/ - but this is only a temporary solution.
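Such a temporary access via port-forwarding could look like this - a sketch, assuming the service from my NGINX posting is called web:

kubectl port-forward service/web 8080:80

As long as this command runs, http://localhost:8080 reaches the service.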

Exposing to the outside without any additional component: just use nodeport (e.g. following my posting: http://dietrichschroff.blogspot.com/2020/11/kubernetes-with-microk8s-first-steps-to.html )

Loadbalancer uses a loadbalancer from Azure or AWS or ... (take a look here: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/ )

ExternalName works differently: it maps the service to an external DNS name by returning a CNAME record, instead of selecting pods.





Windows 10 & Office 2010: File Explorer crashes by clicking on a Office document

Wed, 2021-02-03 11:59

After an upgrade from whatever version to windows 10, Microsoft installs the office 365 apps. This leads to a crashing explorer (including a restart of the desktop's menu bar).

 

The funny thing: if you start Word/Excel/... first and then use "open file" inside the office application, everything works. The file chooser has no problem and does not freeze.

The solution is very easy:
Just uninstall the office 365 app. 

All the other solutions proposed on the internet do not work, like:

  • Do a repair on your office application via system settings --> programs
  • Disable the preview feature for the file explorer
  • Disable custom shell extensions for the file explorer
  • Do a clean windows reinstall (<-- this was really a tip)

Office 365: Enable mail forwarding to external email domains...

Sun, 2021-01-31 14:11

For a society I do some IT administration things - and now something really new: an Office 365 tenant.

First thing was to enable mail forwarding to external email accounts. Sounds easy - hmmm not really.

Configuring the forwarding in outlook.com is quite easy:

But this does not work:

Remote Server returned '550 5.7.520 Access denied, Your organization does not allow external forwarding. Please contact your administrator for further assistance. AS(7555)'

To change this behaviour you have to go to the admin settings:

https://protection.office.com/antispam

Now click policy:

Then choose Anti-spam:

And then choose the third entry and click "edit policy":
And the last step: change "Automatic forwarding" to "on".
After you click save, emails will now be forwarded to external domains...


Microk8s: rejoining a node after a reinstall of microk8s

Sat, 2021-01-30 04:04

 

If you are running a microk8s kubernetes cluster, you can hit the scenario that you lose one node and have to reinstall the complete os or just microk8s.

In this case you want to join this node once again to your cluster. But removing the node first does not work, because the rest of the cluster cannot reach the node (because it is gone...):

root@zigbee:/home/ubuntu# microk8s.remove-node ubuntu
Removal failed. Node ubuntu is registered with dqlite. Please, run first 'microk8s leave' on the departing node.
If the node is not available anymore and will never attempt to join the cluster in the future use the '--force' flag
to unregister the node while removing it.

The solution is given in the error message: just add "--force"

root@zigbee:/home/ubuntu# microk8s.remove-node ubuntu --force
root@zigbee:/home/ubuntu# microk8s.add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.178.57:25000/de0736090ce0055e45aff1c5897deba0
If the node you are adding is not reachable through the default interface you can use one of the following:
 microk8s join 192.168.178.57:25000/de0736090ce0055e45aff1c5897deba0
 microk8s join 172.17.0.1:25000/de0736090ce0055e45aff1c5897deba0
 microk8s join 10.1.190.192:25000/de0736090ce0055e45aff1c5897deba0

And then the join works without any problem:

root@ubuntu:/home/ubuntu# microk8s join 192.168.178.57:25000/de0736090ce0055e45aff1c5897deba0
Contacting cluster at 192.168.178.57
Waiting for this node to finish joining the cluster. ..  
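To double-check, the node should show up again after the join (a sketch):

microk8s kubectl get nodes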
 

Signal: Data backup of newer signal versions cannot be imported

Wed, 2021-01-27 15:12

 

I switched from WhatsApp to Signal (in the sense that many conversations are now on Signal, but some are still left on WhatsApp) and afterwards I moved to a new smartphone.

But while doing the restore procedure for the backup (take a look here) I got this error:

Data backup of newer signal versions cannot be imported

or in german

Datensicherungen neuerer Signal-Versionen können nicht importiert werden


 

I checked the version numbers in the Android Play Store, and both were 5.2.3.

On my new smartphone the android os was not on the latest release (there were still some outstanding os updates to install).

But nothing did the job - I asked Signal support, so let's see what they tell me...

EDIT: Even uninstalling and reinstalling Signal on my old smartphone still led to this error message...

MicroK8s: kubectl get componentstatus deprecated - etcd status missing

Tue, 2021-01-26 15:46


 

If you want to check the health of the basic components with

kubectl get componentstatuses 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok        
scheduler            Healthy   ok       

Then etcd is missing.

This is caused by a change in the api of kubernetes: https://kubernetes.io/docs/setup/release/notes/#deprecation-5


The command to check etcd is:

kubectl get --raw='/readyz?verbose'
[+]ping ok
[+]log ok
[+]etcd ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check passed
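On recent kubernetes versions there is also a corresponding liveness endpoint which can be queried the same way:

kubectl get --raw='/livez?verbose'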


Microk8s: publishing the dashboard (reachable from remote/internet)

Sat, 2021-01-23 15:22

 

If you enable the dashboard on a microk8s cluster (or single node) you can follow this tutorial: https://microk8s.io/docs/addon-dashboard

The problem is, the command

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

has to be re-executed every time you restart the node from which you access the dashboard.

A better configuration can be done this way: Run the following command and change 

type: ClusterIP -->   type: NodePort

kubectl -n kube-system edit service kubernetes-dashboard

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2021-01-22T21:19:24Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "3599"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 19496d44-c454-4f55-967c-432504e0401b
spec:
  clusterIP: 10.152.183.81
  clusterIPs:
  - 10.152.183.81
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Then run

root@ubuntu:/home/ubuntu# kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.152.183.81   <none>        443:30713/TCP   4m14s

After that you can access the dashboard via the port given behind the "443:" - in my case https://zigbee:30713
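If you prefer a non-interactive way, the same change can be done with a one-liner (a sketch):

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'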

 

 

Microk8s: No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml' while joining a cluster

Fri, 2021-01-22 15:12

 Kubernetes cluster with microk8s on raspberry pi

If you want to join a node and you get the following error:

microk8s join 192.168.178.57:25000/6a3ce1d2f0105245209e7e5e412a7e54

Contacting cluster at 192.168.178.57
Traceback (most recent call last):
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 967, in <module>
    join_dqlite(connection_parts)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 900, in join_dqlite
    update_dqlite(info["cluster_cert"], info["cluster_key"], info["voters"], hostname_override)
  File "/snap/microk8s/1908/scripts/cluster/join.py", line 818, in update_dqlite
    with open("{}/info.yaml".format(cluster_backup_dir)) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml'

This error happens if you have not enabled dns on your nodes.

So just run "microk8s.enable dns" on every machine:

microk8s.enable dns

Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node 192.168.178.57
Adding argument --cluster-dns to nodes.
Configuring node 192.168.178.57
Restarting nodes.
Configuring node 192.168.178.57
DNS is enabled
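Before retrying the join you can check that coredns is actually up (a sketch):

microk8s kubectl get pods -n kube-system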

And after that the join works as expected:

root@ubuntu:/home/ubuntu# microk8s join 192.168.178.57:25000/ed3f57a3641581964cad43f0ceb2b526
Contacting cluster at 192.168.178.57
Waiting for this node to finish joining the cluster. ..  
root@ubuntu:/home/ubuntu# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ubuntu   Ready    <none>   3m35s   v1.20.1-34+97978f80232b01
zigbee   Ready    <none>   37m     v1.20.1-34+97978f80232b01
 

MicroK8s: Kubernetes on raspberry pi - get nodes= NotReady

Wed, 2021-01-20 15:44

On my little kubernetes cluster with microK8s

 


I got this problem:

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
zigbee   NotReady   <none>   59d   v1.19.5-34+b1af8fc278d3ef
ubuntu   Ready      <none>   59d   v1.19.6-34+e6d0076d2a0033

The first step towards a solution was:

kubectl describe node zigbee

and in the output I found:

Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 14m                kubelet     Starting kubelet.
  Warning  SystemOOM                14m                kubelet     System OOM encountered, victim process: influx, pid: 3256628
  Warning  InvalidDiskCapacity      14m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientMemory
Hmmm - so running additional databases and processes outside of kubernetes on the same node is not such a good idea.

But as a fast solution: I ejected the SD card, resized the partition and added swap using my laptop, and put the SD card back into the raspberry pi...
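For reference, adding a swapfile on a standard Ubuntu installation looks roughly like this (a sketch - note that a vanilla kubelet refuses to start with swap enabled unless --fail-swap-on=false is set; as far as I know microk8s sets this flag by default):

sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile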

Review: Kafka: The Definitive Guide

Wed, 2021-01-06 14:55

Last week I read the book "Kafka: The Definitive Guide" with the subtitle "Real-Time Data and Stream Processing at Scale", which was provided by confluent.io:


The book contains 11 chapters on 288 pages - let's take a look at the content:

Chapter 1 "meet Kafka" start with a motivation, why moving data is important and why you should not spend your effort not into moving but into your business. In addition an introduction to the messaging concepts like publish/subscribe, queues, messages, batches, schemas, topics, partitions, ... Many technical terms are defined there, but some are specific to Kafka and some are more general definitions. One additional info: Kafka was built by linkedin - the complete story told in the last section of this chapter.

The second chapter is about installing Kafka. Nothing special. OS, Java, Zookeeper (if clustered), Kafka.

Chapter 3 is called "Kafka producers: Writing messages to Kafka". As the title indicates, all configuration details about sending messages are listed and explained. 

Chapter 4 is the same as the previous chapter, but for reading messages. Both chapters contain many java example listings.

Chapters 5 & 6 are about clusters and reliability. Here are the nifty details explained like high water marks, message replication, timeouts, indices, ... If you want to run a high available Kafka system, you should read that and in case of failures you will know what to do.

Chapter 7 introduces Kafka Connect. Here a citation, when you should use Connect (it is not possible to summarize this):

You will use Connect to connect Kafka to datastores that you did not write and whose code you cannot or will not modify. Connect will be used to pull data from the external datastore into Kafka or push data from Kafka to an external store. For datastores where a connector already exists, Connect can be used by nondevelopers, who will only need to configure the connectors.

"Cross data cluster mirroring" is the title of chapter 8 - i do not understand why this chapter is not placed before chapter 7...

In chapters 9 and 10 administration and monitoring are explained. Very impressive is the amount of CLI examples: if you have a question, here you will find the CLI command which provides the answer.
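To give an impression of the CLI: listing all topics, for example, works like this on current Kafka versions (a sketch):

kafka-topics.sh --bootstrap-server localhost:9092 --list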

The last chapter "stream processing" is one of the longest chapters (>40 pages).  Here two APIs are presented to do some processing based on the messages. One example is, a stream which processes stock quotes. With stream processing it is possible to calculate the number of trades for every five-second window or the average ask price for every five-second window. Of course this chapter shows much more, but i think this gives the best impression ;-)

All in all an excellent book - even if you are not implementing Kafka ;-)



 

Samsung A50: boot loop problem after last Samsung OS update

Thu, 2020-12-31 04:36

I used a Samsung A50 for nearly 1.5 years and was very satisfied with the device. 128GB internal storage and dual sim - I do not need more :)

But last week the monthly "security" update was done by Samsung, and after booting the new OS everything seemed to be fine. But only a few hours later (I did not install any new software - I was just browsing the web on my favourite news page) the smartphone froze, and since then it keeps showing this screen for hours:

With pressing "Volume Up" and "Power" i was able to open the recovery mode, but after a factory reset, still the boot screen is shown...

Anyone else with this problem? Please leave a comment!

