Nextcloud with podman rootless containers and user systemd services. Part IV - Exposing Nextcloud externally

Introducing bunkerized-nginx 🔗I heard about bunkerized-nginx a while ago and thought it would make a nice reverse proxy, so I could expose my internal services to the internet ‘safely’. A non-exhaustive list of features (copied from the README):

- HTTPS support with transparent Let’s Encrypt automation
- State-of-the-art web security: HTTP security headers, prevent leaks, TLS hardening, …
- Integrated ModSecurity WAF with the OWASP Core Rule Set
- Automatic ban of strange behaviors with fail2ban
- Antibot challenge through cookie, javascript, captcha or recaptcha v3
- Block TOR, proxies, bad user-agents, countries, …
- Block known bad IP with DNSBL and CrowdSec
- Prevent bruteforce attacks with rate limiting
- Detect bad files with ClamAV
- Easy to configure with environment variables or web UI
- Automatic configuration with container labels

A must-have for me was support for Let’s Encrypt and an easy way to configure it.
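To give an idea of the environment-variable configuration style, a reverse-proxy setup for an internal service could look roughly like the sketch below. This is only a sketch: the hostname and upstream are made up, and the variable names are taken from my reading of the README and may differ between bunkerized-nginx versions.

```shell
# Hedged sketch: variable names come from the bunkerized-nginx README and
# may differ between releases; cloud.example.com and the nextcloud upstream
# are placeholders.
podman run -d --name bunkerized-nginx \
  -p 80:8080 -p 443:8443 \
  -e SERVER_NAME=cloud.example.com \
  -e AUTO_LETS_ENCRYPT=yes \
  -e USE_REVERSE_PROXY=yes \
  -e REVERSE_PROXY_URL=/ \
  -e REVERSE_PROXY_HOST=http://nextcloud:8080 \
  bunkerity/bunkerized-nginx
```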

Nextcloud with podman rootless containers and user systemd services. Part V - Running Nextcloud as a pod with play kube

podman play kube 🔗One of the cool things about podman is that it is not just a docker replacement, it can do so much more! The feature I’m talking about here is the ability to run Kubernetes YAML pod definitions! How cool is that? You can read more about this feature in the podman-play-kube man page, but essentially you just need a proper pod YAML definition and podman play kube /path/to/my/pod.yaml will run it for you.
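As a quick illustration (the pod name, image and ports are just examples, not from any of my posts), a trivial pod definition and its invocation would look like this:

```shell
# Write a minimal Kubernetes Pod definition (nginx chosen only as an example):
cat > /tmp/mypod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:alpine
    ports:
    - containerPort: 80
      hostPort: 8080
EOF

# Run it with podman, no Kubernetes cluster involved:
podman play kube /tmp/mypod.yaml
```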

Deploy Inspektor Gadget on OpenShift 4.6

Introduction 🔗Inspektor Gadget is a collection of tools (or gadgets) to debug and inspect Kubernetes applications. Inspektor Gadget is deployed to each node as a privileged DaemonSet. It uses in-kernel BPF helper programs to monitor events mainly related to syscalls from userspace programs in a pod. The BPF programs are run by the kernel and gather the log data. Inspektor Gadget’s userspace utilities fetch the log data from ring buffers and display it.
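Deployment itself is a one-liner with the kubectl-gadget plugin. The commands below are a sketch based on my reading of the project README; gadget names vary between releases, so check kubectl gadget --help on your version.

```shell
# Deploy the privileged DaemonSet to every node (assumes the kubectl-gadget
# plugin is installed, e.g. via krew):
kubectl gadget deploy | kubectl apply -f -

# Example gadget: trace exec() calls from pods (the gadget name mirrored the
# BCC tool in early releases and may differ in yours):
kubectl gadget execsnoop
```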

Deploy OpenShift Virtualization 2.5 on OCP 4.6.1 on baremetal IPI

Preparation 🔗Ensure your workers have the virtualization flag enabled:

```shell
for node in $(oc get nodes -o name | grep kni1-worker); do
  oc debug ${node} -- grep -c -E 'vmx|svm' /host/proc/cpuinfo
done
```

That snippet should return the number of cpu cores with virtualization enabled (it should be all of them). Subscription 🔗

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
  - openshift-cnv
---
apiVersion: operators.
```
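Once the subscription objects are applied, the install can be checked with standard OLM commands (the namespace comes from the snippet above; CSV names and versions will vary):

```shell
# Verify the operator's ClusterServiceVersion reaches the Succeeded phase:
oc get csv -n openshift-cnv

# And that the virtualization pods are running:
oc get pods -n openshift-cnv
```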

Deploy OCS 4.5 on OCP 4.6.1 on baremetal IPI

Preparation 🔗Label the nodes you want to use for OCS, in my case:

```shell
for node in $(oc get nodes -o name | grep kni1-worker); do
  oc label ${node} cluster.ocs.openshift.io/openshift-storage=''
done
```

Local storage operator 🔗Deploy the local storage operator:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: local-storage
spec: {}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  annotations:
    olm.providedAPIs: LocalVolume.v1.local.storage.openshift.io
  name: local-storage
  namespace: local-storage
spec:
  targetNamespaces:
  - local-storage
---
apiVersion: operators.
```
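With the local storage operator in place, the labeled nodes get their disks exposed through a LocalVolume object. The sketch below is illustrative: the device path /dev/sdb and the localblock storage class name must be adapted to your hardware.

```shell
# Hypothetical LocalVolume consuming a block device on the OCS-labeled nodes;
# devicePaths must match your actual disks:
cat <<EOF | oc apply -f -
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: cluster.ocs.openshift.io/openshift-storage
        operator: In
        values:
        - ""
  storageClassDevices:
  - storageClassName: localblock
    volumeMode: Block
    devicePaths:
    - /dev/sdb
EOF
```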

SCC assignments and permissions in OpenShift

SCCs 🔗There is plenty of information out there about SCCs, but this post will focus only on how to create and use a custom SCC. See the OpenShift official documentation on Managing Security Context Constraints for more details. Custom SCC 🔗If you require a custom SCC, a few steps are needed to be able to use it properly. Minimal capabilities 🔗The best way to create a custom SCC is to build it based on the most restricted one (hint: its name is restricted) and then add capabilities and permissions depending on the application’s requirements.
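As a sketch of that approach, the SCC below copies the shape of restricted and adds a single capability. The SCC name, the NET_BIND_SERVICE capability, and the service account/namespace in the grant are all illustrative, not from the post:

```shell
# Hypothetical custom SCC derived from "restricted", adding one capability:
cat <<EOF | oc apply -f -
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: restricted-netbind
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
allowedCapabilities:
- NET_BIND_SERVICE
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
readOnlyRootFilesystem: false
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
EOF

# Grant the SCC to a (hypothetical) service account:
oc adm policy add-scc-to-user restricted-netbind -z myapp -n myproject
```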

Manage external displays with Gnome and Argos extension

I wanted to easily switch between my regular desktop configurations: all the external displays, a single external display, or just the laptop screen. This usually required opening gnome-control-center, clicking Displays, and so on. So I thought it would be nice to look for an extension on the Gnome Extensions site… but I couldn’t find any that worked the way I wanted… so let’s build our own method! :)
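Argos turns a script’s stdout into a panel menu, which is what makes this feasible. The sketch below shows the idea: the xrandr output names (eDP-1, DP-1, DP-2) are examples from one laptop and will differ on other machines, and the exact Argos attribute syntax should be checked against the extension’s README.

```shell
#!/usr/bin/env bash
# Sketch of an Argos script (e.g. ~/.config/argos/displays.1h+.sh).
# Argos renders stdout as a menu: first line is the panel button, "---"
# separates the dropdown, and bash='…' runs a command when clicked.
echo "Displays"
echo "---"
echo "All displays | bash='xrandr --output eDP-1 --auto --output DP-1 --auto --right-of eDP-1 --output DP-2 --auto --right-of DP-1' terminal=false"
echo "Single external | bash='xrandr --output DP-1 --auto --output eDP-1 --off --output DP-2 --off' terminal=false"
echo "Laptop only | bash='xrandr --output eDP-1 --auto --output DP-1 --off --output DP-2 --off' terminal=false"
```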

Podman rootless containers in RHEL7

Quick howto to make podman rootless containers work in RHEL7:

```shell
sudo yum clean all
sudo yum update -y
sudo yum install slirp4netns podman -y
echo "user.max_user_namespaces=28633" | sudo tee -a /etc/sysctl.d/userns.conf
sudo sysctl -p /etc/sysctl.d/userns.conf
sudo usermod --add-subuids 200000-300000 --add-subgids 200000-300000 $(whoami)
podman system migrate
```

Then, log out and log back in. Easy peasy!
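A quick way to sanity-check the result (not part of the original howto) is to inspect the user namespace mapping: the second line of the uid_map should show the subuid range assigned above.

```shell
# Show the uid mapping of the rootless user namespace; with the usermod
# above it should map host uids starting at 200000:
podman unshare cat /proc/self/uid_map
```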

Simulate ONTAP ® 9.6 on KVM + Trident 20.04 on OCP4

Introduction 🔗NetApp filers can be used to provide dynamic storage to OCP4/k8s using NetApp’s Trident storage orchestrator. Normally you need real NetApp hardware to play with, but NetApp also offers a simulator. NOTE: The simulator is not publicly available; you can only access it if you are a customer or partner, and you are required to have a proper NFS license.
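Once the simulator is up, Trident is pointed at it through a backend definition. The sketch below uses the ontap-nas driver; the LIF addresses, SVM name and credentials are placeholders for a simulator setup, not values from the post.

```shell
# Hypothetical ontap-nas backend for Trident; adjust LIFs, SVM and
# credentials to your simulator:
cat > backend.json <<EOF
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "backendName": "ontap-sim",
  "managementLIF": "192.168.122.100",
  "dataLIF": "192.168.122.101",
  "svm": "svm0",
  "username": "admin",
  "password": "secret"
}
EOF

# Register the backend with the Trident installation:
tridentctl create backend -f backend.json -n trident
```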

Xiaomi Mijia Ble Sensor MQTT on containers on Kubernetes

Intro 🔗As I mentioned in my previous post, everything was working flawlessly… except for a bluetooth issue in my raspberry pi 3 that basically renders bluetooth unusable… but it is rebooted daily via a cron job, so it’s a minor issue :) (I know, I know, I’m planning a better workaround…) This was good enough, but a few days ago I decided to give k3sup a chance and install k3s (a lightweight Kubernetes distribution focused on ARM/IoT devices) on a spare pine64 that was gathering dust in a drawer :)
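For context, a k3sup install boils down to one command over SSH. The IP, user and key below are placeholders for a pine64 board, and flags should be double-checked against your k3sup version:

```shell
# Install k3s on the remote board over SSH (placeholders for IP/user/key):
k3sup install --ip 192.168.1.50 --user rock64 --ssh-key ~/.ssh/id_rsa

# k3sup drops a kubeconfig in the current directory:
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes
```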