Anyone following my quest to migrate to Kubernetes has read about the thoughts and problems involved in finding a cost-effective yet workable storage back-end.
I ended up simply using SSHFS on a storage box provided by my cloud provider. SSHFS is a FUSE-based network file system, meaning it runs in user space rather than inside the kernel. Since multiple hosts must be able to access the file system, it has to be a network file system; otherwise the hosts would lock each other out while writing to it. In a real production environment with requirements for high availability and high performance, this is not suitable.

However, I am running a test and development environment and have to watch my costs, so this will suffice for now. Upgrades can always be done down the road.
All you really have to do is install the sshfs package (the name may differ depending on your OS). Then add an entry to your /etc/fstab like this:
#sshfs share
:share /data/share fuse.sshfs comment=sshfs,defaults,transform_symlinks,identityfile=/root/.ssh/id_rsa,users,exec,auto,allow_other,_netdev,uid=1000,gid=1000,reconnect 0 0
Then mount it at least once interactively, so the login to the SSHFS share can succeed. For the login to work unattended later, the storage box's SSH host key fingerprint has to be accepted and stored in known_hosts first.
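As a sketch of that first interactive mount (the mount point /data/share matches the fstab entry above; the remote address of your storage box will differ):

```shell
# Create the mount point referenced in /etc/fstab
mkdir -p /data/share

# Mount once interactively; confirm the host key fingerprint when prompted,
# so it is stored in /root/.ssh/known_hosts for future automatic mounts
mount /data/share

# Verify the share is mounted
mount | grep /data/share
```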
Afterwards you need to create a specific StorageClass in Kubernetes:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sshfs-storage-class
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Then you have to refer to that storage class when creating your persistent volume (PV) and persistent volume claim (PVC).
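Here is a minimal sketch of such a PV and PVC. The names, the 10Gi size, and the hostPath approach are my assumptions, not a prescription; I use hostPath here because the SSHFS share is mounted at the same local path on every node:

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sshfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: sshfs-storage-class
  # The SSHFS mount point from /etc/fstab, present on every node
  hostPath:
    path: /data/share
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sshfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: sshfs-storage-class
```

Because the storage class uses WaitForFirstConsumer, the PVC stays pending until a pod actually references it.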
Well, that really is it so far.
Happy coding!