Saturday, 16 October 2021

Yay, VMware Fusion and macOS Big Sur - no longer "NAT good friends" - forgive the double negative and the terrible pun ...

After macOS 11 Big Sur was released in 2020, VMware updated their Fusion product to v12 and, sadly, managed to break Network Address Translation (NAT), as per their release notes: -

VMware Fusion 12 Release Notes

Unable to access port forwarding on a NAT virtual machine, if the traffic is routed through the host IP stack on Big Sur hosts

On Big Sur hosts, if user configures NAT port forwarding in Fusion, the service running in the virtual machine is unavailable on the macOS host using localhost:exposedPort, hostIP:exposedPort, or 127.0.0.1:exposedPort; Port forwarding is also not accessible inside a NAT virtual machine using hostIP:exposedPort.

Thankfully, with Fusion 12.2.0, this is now resolved: -

VMware Fusion 12.2.0 Release Notes

Resolved Issues

Unable to access port forwarding on a NAT virtual machine if the traffic is routed through the host IP stack on Big Sur hosts

On Big Sur hosts, if a user configures NAT port forwarding in Fusion, the service running in the virtual machine is unavailable on the macOS host using localhost:exposedPort, hostIP:exposedPort, or 127.0.0.1:exposedPort; Port forwarding is also not accessible inside a NAT virtual machine using hostIP:exposedPort.

This issue is now resolved.

I updated this AM, and am now running Fusion 12.2.0 on macOS 11.6 and all is well - my Windows 10 VM happily shares my Mac's connection using NAT, and can tunnel via my Cisco AnyConnect VPN, which is nice ....

Wednesday, 13 October 2021

For my future self - don't try and use crictl to deploy pods into an existing Kubernetes cluster

I'm doing some work with the Kata Containers v2 runtime, and was trying to test it using crictl.

First I created a pair of configuration documents - one YAML (for the pod sandbox) and one JSON (for the container): -

tee podsandbox-config.yaml <<EOF
metadata:
  attempt: 1
  name: busybox-sandbox
  namespace: default
  uid: hdishd83djaidwnduwk28bcsb
log_directory: /tmp
linux:
  namespaces:
    options: {}
EOF
tee container-config.json <<EOF
{
  "metadata": {
      "name": "busybox"
  },
  "image":{
      "image": "busybox"
  },
  "command": [
      "top"
  ],
  "log_path":"busybox.log",
  "linux": {
  }
}
EOF

and then I used the first of those to create a Pod Sandbox: -

sandbox=$(crictl runp -r kata podsandbox-config.yaml)
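
For completeness, the next step would have been to use the second (container) config to create and start a container inside that sandbox - a minimal sketch, reusing the sandbox ID captured above: -

container=$(crictl create $sandbox container-config.json podsandbox-config.yaml)
crictl start $container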

However, the resulting pod soon disappeared - I managed to check its state before it went away: -

crictl pods | grep kata

9fafade8c3216       23 seconds ago      NotReady            busybox-sandbox                                  default             1                   kata

65fc059b8129d       40 minutes ago      Ready               nginx-kata                                       default             0                   kata

and inspected the NotReady pod: -

crictl inspectp 9fafade8c3216 | jq .status.state

"SANDBOX_NOTREADY"

Thankfully someone else had hit this issue over in the cri-o project: -


specifically this comment: -

I guess you might be using crictl to create pod/container on a running kubernetes node. Kubelet deletes unwanted containers/pods, please don't do that on a running kubernetes node.

See https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#example-crictl-commands

Ah, yes, that'd be it!
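
For the record, had this been a standalone containerd/Kata host (i.e. no kubelet around to garbage-collect things), the leftover sandbox would need tidying up by hand - a minimal sketch: -

crictl stopp $sandbox
crictl rmp $sandbox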

Back to K8s ..........

Aiding my memory - parsing Kubernetes using JQ etc.

So I was looking for a way to munge a Kubernetes Compute Node configuration to extract its external/public IP.

I know I can do it using purely K8s: -

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

123.45.6.78

but I also wanted to do it via jq and here it is: -

kubectl get node `kubectl get nodes | tail -1 | awk '{print $1}'` --output json | jq -r '.status.addresses[] | select(.type=="ExternalIP") .address'

123.45.6.78

which is nice!
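
As an aside, the nested kubectl-in-backticks can be avoided entirely by letting jq iterate over every node - a minimal sketch, which prints the ExternalIP of each node rather than just the last one: -

kubectl get nodes --output json | jq -r '.items[].status.addresses[] | select(.type=="ExternalIP") .address'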

Tinkering with Istio and Envoy on IBM Kubernetes Service via macOS

Whilst I've been aware of Istio for some years, I've never really played with it.

Well, today that's changing ...

I'm following this tutorial guide: -

Getting Started

and started by installing the CLI tool / installation file on my Mac: -

curl -L https://istio.io/downloadIstio | sh -

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   102  100   102    0     0     72      0  0:00:01  0:00:01 --:--:--    72
100  4549  100  4549    0     0   2693      0  0:00:01  0:00:01 --:--:--  2693
Downloading istio-1.11.3 from https://github.com/istio/istio/releases/download/1.11.3/istio-1.11.3-osx.tar.gz ...
Istio 1.11.3 Download Complete!
Istio has been successfully downloaded into the istio-1.11.3 folder on your system.
Next Steps:
See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster.
To configure the istioctl client tool for your workstation,
add the /Users/hayd/istio-1.11.3/bin directory to your environment path variable with:
export PATH="$PATH:/Users/hayd/istio-1.11.3/bin"
Begin the Istio pre-installation check by running:
istioctl x precheck 
Need more information? Visit https://istio.io/latest/docs/setup/install/ 

and added the installation directory to my path: -

export PATH="$PATH:$HOME/istio-1.11.3/bin"

and validated the istioctl tool: -

which istioctl

/Users/hayd/istio-1.11.3/bin/istioctl

istioctl version

no running Istio pods in "istio-system"
1.11.3

and then installed it into my K8s 1.20 cluster: -

istioctl install --set profile=demo -y

✔ Istio core installed                                                                                                                                                                  
✔ Istiod installed                                                                                                                                                                       
✔ Ingress gateways installed                                                                                                                                                             
✔ Egress gateways installed                                                                                                                                                              
✔ Installation complete                                                                                                                                                                  
Thank you for installing Istio 1.11.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/asdsdasdas

and checked the now running pods: -

kubectl get pods -A

NAMESPACE      NAME                                         READY   STATUS    RESTARTS   AGE
ibm-system     addon-catalog-source-2x7hj                   1/1     Running   0          42h
ibm-system     catalog-operator-578f7c8857-666wd            1/1     Running   0          42h
ibm-system     olm-operator-6c45d79d96-pjtmr                1/1     Running   0          42h
istio-system   istio-egressgateway-5fdc76bf94-v5dpg         1/1     Running   0          59s
istio-system   istio-ingressgateway-6bd7764b48-rr4fp        1/1     Running   0          59s
istio-system   istiod-675949b7c5-zqg6w                      1/1     Running   0          74s
kube-system    calico-kube-controllers-78ccd56cd7-wqgtf     1/1     Running   0          42h
kube-system    calico-node-pg6vv                            1/1     Running   0          42h
kube-system    calico-typha-ddd44968b-86cgs                 1/1     Running   0          42h
kube-system    calico-typha-ddd44968b-ffxmt                 0/1     Pending   0          42h
kube-system    calico-typha-ddd44968b-mqjrb                 0/1     Pending   0          42h
kube-system    coredns-7fc9f85d9c-5rwwv                     1/1     Running   0          42h
kube-system    coredns-7fc9f85d9c-bxtts                     1/1     Running   0          42h
kube-system    coredns-7fc9f85d9c-qk6gv                     1/1     Running   0          42h
kube-system    coredns-autoscaler-9cccfb98d-mw9qj           1/1     Running   0          42h
kube-system    dashboard-metrics-scraper-7c75dcd466-d5b9f   1/1     Running   0          42h
kube-system    ibm-keepalived-watcher-856k6                 1/1     Running   0          42h
kube-system    ibm-master-proxy-static-10.144.213.225       2/2     Running   0          42h
kube-system    kubernetes-dashboard-659cd5b798-thd57        1/1     Running   0          42h
kube-system    metrics-server-b7bc76594-4fdg2               2/2     Running   0          42h
kube-system    vpn-546847fcbf-dzzml                         1/1     Running   0          42h

and added the label that enables automatic Envoy sidecar injection: -

kubectl label namespace default istio-injection=enabled

namespace/default labeled
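
and, just to be sure the label had stuck, an optional quick check: -

kubectl get namespace default --show-labels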

and then deployed the sample Bookinfo application: -

cd ~/istio-1.11.3/

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created


and verified the created services: -

kubectl get services

NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   172.21.9.104     <none>        9080/TCP   34m
kubernetes    ClusterIP   172.21.0.1       <none>        443/TCP    43h
productpage   ClusterIP   172.21.149.123   <none>        9080/TCP   34m
ratings       ClusterIP   172.21.233.195   <none>        9080/TCP   34m
reviews       ClusterIP   172.21.163.74    <none>        9080/TCP   34m

and pods: -

kubectl get pods

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-zvnzr       2/2     Running   0          34m
productpage-v1-6b746f74dc-swnwk   2/2     Running   0          34m
ratings-v1-b6994bb9-kspd6         2/2     Running   0          34m
reviews-v1-545db77b95-bwdmz       2/2     Running   0          34m
reviews-v2-7bf8c9648f-h2nsl       2/2     Running   0          34m
reviews-v3-84779c7bbc-x2v2l       2/2     Running   0          34m

before testing the application: -

kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"

<title>Simple Bookstore App</title>

and then configured the Istio gateway: -

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

and ran the istioctl analysis: -

istioctl analyze

✔ No validation issues found when analyzing namespace: default.

and set the INGRESS_PORT and SECURE_INGRESS_PORT variables: -

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
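
Note that these jsonpath queries assume the istio-ingressgateway service is exposed via NodePorts, which it is on my IKS cluster (no external load balancer); a quick look at the service confirms this: -

kubectl -n istio-system get service istio-ingressgateway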

and grabbed the external IP of my K8s Compute Node into the INGRESS_HOST variable: -

export INGRESS_HOST=$(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}')

and set the GATEWAY_URL variable: -

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

and then hit the sample application: -

curl $(echo "http://$GATEWAY_URL/productpage")

which returns a bunch of HTML 🤣
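
To keep the output manageable, the same title-grepping trick from the earlier in-cluster test works here too: -

curl -s "http://$GATEWAY_URL/productpage" | grep -o "<title>.*</title>"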

I also hit the same URL via a real browser: -

And, finally, deployed and accessed the Dashboard: -

kubectl apply -f samples/addons

serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

kubectl rollout status deployment/kiali -n istio-system

Waiting for deployment "kiali" rollout to finish: 0 of 1 updated replicas are available...
deployment "kiali" successfully rolled out

istioctl dashboard kiali

http://localhost:20001/kiali

which popped up a browser ....

Having thrown some traffic at the application: -

for i in $(seq 1 100); do curl -s -o /dev/null "http://A.B.C.D:30588/productpage"; done
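
The same loop can reuse the GATEWAY_URL variable set earlier, rather than hard-coding the node IP and NodePort: -

for i in $(seq 1 100); do curl -s -o /dev/null "http://$GATEWAY_URL/productpage"; done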

I could then see the application/flow/throughput etc. via the dashboard: -

To conclude, the Getting Started guide is really rather peachy, and definitely worth following through ....

Hacking go.mod files - there IS a better way

TL;DR: for our project, we're making heavy use of Go and, more specifically, modules: -

Modules are how Go manages dependencies.

A module is a collection of packages that are released, versioned, and distributed together. Modules may be downloaded directly from version control repositories or from module proxy servers.

A module is identified by a module path, which is declared in a go.mod file, together with information about the module’s dependencies. The module root directory is the directory that contains the go.mod file. The main module is the module containing the directory where the go command is invoked.

Each package within a module is a collection of source files in the same directory that are compiled together. A package path is the module path joined with the subdirectory containing the package (relative to the module root). For example, the module "golang.org/x/net" contains a package in the directory "html". That package’s path is "golang.org/x/net/html".

Source: Go Modules

In our specific use case, we have a go.mod file which contains the replace directive: -

A replace directive replaces the contents of a specific version of a module, or all versions of a module, with contents found elsewhere. The replacement may be specified with either another module path and version, or a platform-specific file path.

Source: replace directive

We had a requirement to update this replace directive, and have it point at a specific fork of a GitHub project rather than defaulting to the "main" upstream repository.

So, using the documentation's example: -

replace golang.org/x/net v1.2.3 => example.com/fork/net v1.4.5

we wanted to change from fork to, say, foobar AND specify a different version e.g. v1.5.7

Now it's perfectly easy to do this merely by hand-editing go.mod e.g. vi go.mod ....

BUT

there is ( as ever ) a better way ....

go mod edit --replace golang.org/x/net=example.com/foobar/net@v1.5.7

which gives us this in go.mod : -

replace golang.org/x/net => example.com/foobar/net v1.5.7

which is a much nicer approach ...
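
and, should we ever need to undo the change, go mod edit can remove the directive again - a minimal sketch: -

# drop the replace directive for golang.org/x/net from go.mod
go mod edit -dropreplace golang.org/x/net
# and tidy up the module requirements afterwards
go mod tidy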

TL;DR: go mod edit is your friend!

Kubernetes - tailing Pod logs - TIL

I saw this on Twitter a few days back: -


but hadn't yet got around to playing with it.

I thought I'd test it with a brand new K8s 1.20 cluster running on IBM Kubernetes Service (IKS).

Firstly, I queried the pods that were running: -

kubectl get pods -A

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
ibm-system    addon-catalog-source-2x7hj                   1/1     Running   0          41h
ibm-system    catalog-operator-578f7c8857-666wd            1/1     Running   0          41h
ibm-system    olm-operator-6c45d79d96-pjtmr                1/1     Running   0          41h
kube-system   calico-kube-controllers-78ccd56cd7-wqgtf     1/1     Running   0          41h
kube-system   calico-node-pg6vv                            1/1     Running   0          41h
kube-system   calico-typha-ddd44968b-86cgs                 1/1     Running   0          41h
kube-system   calico-typha-ddd44968b-ffxmt                 0/1     Pending   0          41h
kube-system   calico-typha-ddd44968b-mqjrb                 0/1     Pending   0          41h
kube-system   coredns-7fc9f85d9c-5rwwv                     1/1     Running   0          41h
kube-system   coredns-7fc9f85d9c-bxtts                     1/1     Running   0          41h
kube-system   coredns-7fc9f85d9c-qk6gv                     1/1     Running   0          41h
kube-system   coredns-autoscaler-9cccfb98d-mw9qj           1/1     Running   0          41h
kube-system   dashboard-metrics-scraper-7c75dcd466-d5b9f   1/1     Running   0          41h
kube-system   ibm-keepalived-watcher-856k6                 1/1     Running   0          41h
kube-system   ibm-master-proxy-static-10.144.213.225       2/2     Running   0          41h
kube-system   kubernetes-dashboard-659cd5b798-thd57        1/1     Running   0          41h
kube-system   metrics-server-b7bc76594-4fdg2               2/2     Running   0          41h
kube-system   vpn-546847fcbf-dzzml                         1/1     Running   0          41h

and chose the metrics-server-b7bc76594-4fdg2 pod, because it's running two containers: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output json | jq -r .spec.containers[].name

metrics-server
metrics-server-nanny

Grabbing the labels for this pod: -

kubectl get pod metrics-server-b7bc76594-4fdg2 --namespace kube-system --output json | jq .metadata.labels

{
  "cert-checksum": "6457ed123878693e37fbde8b4a0abf966ae050c3",
  "k8s-app": "metrics-server",
  "pod-template-hash": "b7bc76594",
  "version": "v0.4.4"
}

and choosing the k8s-app label, I followed Lili's advice: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --container metrics-server --tail=-1

I1011 17:10:21.606350       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/metrics-server-certs/tls.crt::/etc/metrics-server-certs/tls.key
I1011 17:10:21.606421       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.626860       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.646520       1 secure_serving.go:197] Serving securely on [::]:4443
I1011 17:10:21.627139       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1011 17:10:21.647378       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1011 17:10:21.647922       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:10:21.606461       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:21.686383       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:22.007270       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1011 17:10:22.046837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1011 17:10:22.126237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 

and: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --container metrics-server-nanny --tail=-1

ERROR: logging before flag.Parse: I1011 17:10:03.951537       1 pod_nanny.go:68] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=40m --extra-cpu=0.5m --memory=40Mi --extra-memory=4Mi --threshold=5 --deployment=metrics-server --container=metrics-server --poll-period=300000 --estimator=exponential --use-metrics=true]
ERROR: logging before flag.Parse: I1011 17:10:03.951645       1 pod_nanny.go:69] Version: 1.8.12
ERROR: logging before flag.Parse: I1011 17:10:03.951673       1 pod_nanny.go:85] Watching namespace: kube-system, pod: metrics-server-b7bc76594-4fdg2, container: metrics-server.
ERROR: logging before flag.Parse: I1011 17:10:03.951684       1 pod_nanny.go:86] storage: MISSING, extra_storage: 0Gi
ERROR: logging before flag.Parse: I1011 17:10:03.968655       1 pod_nanny.go:116] cpu: 40m, extra_cpu: 0.5m, memory: 40Mi, extra_memory: 4Mi
ERROR: logging before flag.Parse: I1011 17:10:03.968697       1 pod_nanny.go:145] Resources: [{Base:{i:{value:40 scale:-3} d:{Dec:<nil>} s:40m Format:DecimalSI} ExtraPerNode:{i:{value:5 scale:-4} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:41943040 scale:0} d:{Dec:<nil>} s: Format:BinarySI} ExtraPerNode:{i:{value:4194304 scale:0} d:{Dec:<nil>} s:4Mi Format:BinarySI} Name:memory}]

In other words, I used the --selector switch to grab logs for the two containers by their label, rather than knowing/caring about the pod name.
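
If you don't need the containers separated, the --all-containers flag pulls logs from both in a single pass - a variant of the same command: -

kubectl logs --namespace kube-system --selector k8s-app=metrics-server --all-containers --tail=-1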

I could've done much the same BUT via a more complicated route using grep and awk: -

kubectl logs --namespace kube-system $(kubectl get pods --namespace kube-system | grep metrics-server | awk '{print $1}')  -c metrics-server

I1011 17:10:21.606350       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/metrics-server-certs/tls.crt::/etc/metrics-server-certs/tls.key
I1011 17:10:21.606421       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.626860       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1011 17:10:21.646520       1 secure_serving.go:197] Serving securely on [::]:4443
I1011 17:10:21.627139       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1011 17:10:21.647378       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1011 17:10:21.647922       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:10:21.606461       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:21.686383       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1011 17:10:22.007270       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I1011 17:10:22.046837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I1011 17:10:22.126237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 


so Lili's approach is WAY more streamlined.

Tuesday, 12 October 2021

More fun with keyctl on Ubuntu

Following on from my earlier post: -

Fun with keyctl on Ubuntu

I started seeing the same Bad message exception when adding a certificate into a keyring: -

keyctl padd asymmetric foo @u < ~/ssl/server.crt 

add_key: Bad message

even though the required kernel module was loaded: -

lsmod |grep pkcs

pkcs8_key_parser       16384  0

and this appeared to be a valid certificate: -

file ~/ssl/server.crt

/home/hayd/ssl/server.crt: PEM certificate

openssl verify -verbose -CAfile ~/ssl/etcd-ca.crt ~/ssl/server.crt

/home/hayd/ssl/server.crt: OK

so, as per the above, the certificate is stored in Privacy Enhanced Mail (PEM) format: -

PEM or Privacy Enhanced Mail is a Base64 encoded DER certificate. PEM certificates are frequently used for web servers as they can easily be translated into readable data using a simple text editor. Generally when a PEM encoded file is opened in a text editor, it contains very distinct headers and footers.


Jumping to the conclusion that keyctl may require a different format, e.g. Distinguished Encoding Rules (DER), instead: -

DER (Distinguished Encoding Rules) is a binary encoding for X.509 certificates and private keys. Unlike PEM, DER-encoded files do not contain plain text statements such as -----BEGIN CERTIFICATE-----. DER files are most commonly seen in Java contexts.
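
As an aside, an existing PEM certificate can be converted straight to DER with openssl, rather than being re-issued; in my case, though, I chose to regenerate it (below). A minimal sketch of the conversion: -

openssl x509 -in ~/ssl/server.crt -outform der -out ~/ssl/server.der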


I regenerated the certificate: -

openssl x509 -req -extfile <(printf "subjectAltName=DNS:localhost,DNS:genctl-etcd-cluster.genctl.svc,DNS:genctl-etcd-cluster-client.genctl.svc") -days 365 -in ~/ssl/server.csr -CA ~/ssl/etcd-ca.crt -CAkey ~/ssl/etcd-ca.key -CAcreateserial -out ~/ssl/server.der -outform der

in DER format (via -outform der) and verified it: -

file ~/ssl/server.der

/home/hayd/ssl/server.der: data

and then imported it using keyctl: -

export description="Test1"

keyctl padd asymmetric $description @u < ~/ssl/server.der

526852507

and validated thusly: -

keyctl list @u

1 key in keyring:
526852507: --als--v  1000  1000 asymmetric: Test1

Nice!
