Backup
backup 3-2-1
manual
# archive at remote server
tar -zcvf /home/"$HOSTNAME-$(date '+%Y-%m-%d-T%H-%M-%S').tar.gz" /home/ubuntu/docker
# download from a remote server
scp -i ~/.ssh/id_ed25519_serge ubuntu@oci01-arm01:/home/oci01-arm01-2024-02-29-T02-58-47.tar.gz ~
# archive
ssh -i ~/.ssh/id_ed25519_serge ubuntu@oci01-arm01 'sudo tar -zcvf /home/"$HOSTNAME-$(date +%Y-%m-%d-T%H-%M-%S).tar.gz" /home/ubuntu/docker'
# rsync
rsync -Pav --include="*/" --include="*.gz" --exclude="*" -e "ssh -i ~/.ssh/id_ed25519_serge" ubuntu@oci01-arm01:/home/ ~
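A backup is only useful if it restores, so it is worth exercising the same naming scheme end to end. A minimal sketch against throwaway paths (the mktemp directories and sample file are hypothetical stand-ins for the real server paths):

```shell
#!/bin/sh
# Round-trip test of the archive scheme used above, against temp paths.
set -eu
work=$(mktemp -d)
mkdir -p "$work/docker"
echo "services: {}" > "$work/docker/docker-compose.yml"

# same date-stamped naming pattern as the real backup command
archive="$work/$(hostname)-$(date '+%Y-%m-%d-T%H-%M-%S').tar.gz"
tar -zcf "$archive" -C "$work" docker

# restore into a separate directory and compare against the source
mkdir "$work/restore"
tar -zxf "$archive" -C "$work/restore"
diff "$work/docker/docker-compose.yml" "$work/restore/docker/docker-compose.yml" \
  && echo "archive OK"
```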
nas
volume backup
Volumes are backed up daily.
hyper backup
- vmpool1
- vmpool2
- vmpool3
dsm proxy.homelab.lan
dsm -> Control Panel -> Task Scheduler :
- mount-sshfs-to-nas : mount-sshfs-to-nas.sh
- backup-to-nas : backup-to-nas.sh
# /volume3/homelab/backup/mount-sshfs-to-nas.sh
#!/bin/bash
sudo sshfs -o IdentityFile=/volume1/homes/jdedev/.ssh/id_ed25519_serge ubuntu@10.0.5.111:/ /volume1/mount/proxy.homelab.lan
# /volume3/homelab/backup/backup-to-nas.sh
#!/bin/bash
tar -zcvf /volume3/homelab/backup/proxy.homelab.lan/"$HOSTNAME-$(date '+%Y-%m-%d-T%H-%M-%S').tar.gz" /volume1/mount/proxy.homelab.lan/home/ubuntu/homelab_host_proxy
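Date-stamped archives pile up forever, so the backup script could be followed by a retention pass. A hedged sketch (the 14-day cutoff is an assumption, and `touch -d`/`find -delete` need GNU tools, which Synology's busybox may lack), demonstrated against a temp directory rather than the real NAS path:

```shell
#!/bin/sh
# Prune .tar.gz archives older than 14 days; demo against a throwaway dir.
# For the real job, point backup_dir at the actual backup directory.
set -eu
backup_dir=$(mktemp -d)
touch "$backup_dir/fresh.tar.gz"
touch -d '30 days ago' "$backup_dir/stale.tar.gz"   # GNU touch

find "$backup_dir" -name '*.tar.gz' -mtime +14 -delete
ls "$backup_dir"   # only fresh.tar.gz remains
```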
rsync
Install
Configure /etc/rsyncd.conf
[backup]
path = /hdd8tb/backup/
comment = Backup
read only = false
timeout = 300
auth users = jdedev
secrets file = /etc/rsyncd.secrets
[photo]
path = /hdd8tb/photo/
comment = Backup
read only = false
timeout = 300
auth users = jdedev
secrets file = /etc/rsyncd.secrets
[photo_review]
path = /hdd8tb/photo_review/
comment = Backup
read only = false
timeout = 300
auth users = jdedev
secrets file = /etc/rsyncd.secrets
[cloud]
path = /hdd8tb/cloud/
comment = Backup
read only = false
timeout = 300
auth users = jdedev
secrets file = /etc/rsyncd.secrets
[doc]
path = /hdd8tb/doc/
comment = Backup
read only = false
timeout = 300
auth users = jdedev
secrets file = /etc/rsyncd.secrets
Credentials: nano /etc/rsyncd.secrets
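The secrets file is one user:password pair per line, and the daemon refuses a world-readable file by default (strict modes), so keep it mode 600. The password below is a placeholder:

```
# /etc/rsyncd.secrets  (chmod 600, owned by root)
jdedev:changeme
```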
Enable service
Start service
oracle oci
Open port 3333 in the VM firewall (both the OCI security list and the instance's OS firewall)
# test listen
nc -l 3333
# test connect
nc -zv your-instance-ip 3333
# nc -zv 100.86.183.8 3333
# Connection to 100.86.183.8 3333 port [tcp/*] succeeded!
# Attach the block volume to an instance (paravirtualized recommended) but don't mount it;
# unmount it first if it's already mounted, for safety reasons
# If you have enough free space on your boot volume:
dd if=/dev/sdb | gzip -9 > /export.gz
# Then you can copy the compressed image somewhere else with rsync or whatever
# restoring could be done like this:
zcat export.gz | dd of=/dev/sdc
# If you don't have enough space on your boot volume, use netcat to transfer
# the image over the network to somewhere that does (such as your home PC).
# For example, run this on your instance:
dd if=/dev/sdb | gzip -9 | nc -l 3333
# then run this on your home PC:
nc 10.10.10.10 3333 | pv -b > export.gz
# actual run:
dd if=/dev/sdb | gzip -9 | nc -l 3333
nc 100.107.80.108 3333 | pv -b > export.gz
# restore to the original device:
gunzip -c export.gz | dd of=/dev/sdb
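dd pipelines fail silently on truncation, so it's worth comparing checksums of the source device and the decompressed image. A sketch with a file standing in for the block device (on the real instance you would `sudo sha256sum` against /dev/sdb instead):

```shell
#!/bin/sh
set -eu
# stand-in for /dev/sdb: a small random file
img=$(mktemp)
dd if=/dev/urandom of="$img" bs=1024 count=64 2>/dev/null

# "export" it the same way as above
gzip -9 -c "$img" > "$img.gz"

# checksums of source vs decompressed image must match
src_sum=$(sha256sum < "$img" | cut -d' ' -f1)
img_sum=$(gunzip -c "$img.gz" | sha256sum | cut -d' ' -f1)
[ "$src_sum" = "$img_sum" ] && echo "image OK"
```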
fileserver (hl3)
Install cockpit
Remove root from /etc/cockpit/disallowed-users
Cockpit Modules
# Download Cockpit File Sharing
wget https://github.com/45Drives/cockpit-file-sharing/releases/download/v3.3.4/cockpit-file-sharing_3.3.4-1focal_all.deb
# Download Cockpit Navigator
wget https://github.com/45Drives/cockpit-navigator/releases/download/v0.5.10/cockpit-navigator_0.5.10-1focal_all.deb
# Download Cockpit Identities
wget https://github.com/45Drives/cockpit-identities/releases/download/v0.1.12/cockpit-identities_0.1.12-1focal_all.deb
# Install them
apt install ./*.deb
# apt will complain it can't delete the .deb files, so remove them now
rm ./*.deb
proxmox backup server
Proxmox Backup Server backs up the whole Proxmox cluster (hl1, hl2, hl3, hl4) to a ZFS dataset on hl3 (5 TB HDD) with deduplication. All VMs take ~225 GB (initial backup) as of 29/02/2024. An LXC container is used for the PBS runtime.
The host dataset /data/pbs is mounted inside the LXC container at /mnt/pbs:
mp2: /data/pbs,mp=/mnt/pbs,replicate=0
pbs lxc config
hl3 - 103103 : 10.0.3.103
root (identity) backupuser (password)
pbs config
zfs
snapshots
Sanoid
- Install as per instructions below
- https://github.com/jimsalterjrs/sanoid/blob/master/INSTALL.md
- Create systemd file(s) and reload daemons. Ubuntu didn't create these files.
- https://github.com/jimsalterjrs/sanoid/blob/master/INSTALL.md#centosrhel
- Create configuration file /etc/sanoid/sanoid.conf
- Run sanoid --cron
- Verify snapshots
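The systemd units Ubuntu doesn't ship can be sketched along the lines of the CentOS/RHEL section of the linked INSTALL.md; a minimal sketch, assuming the binary lives at /usr/sbin/sanoid (check with `which sanoid`):

```
# /etc/systemd/system/sanoid.service
[Unit]
Description=Snapshot ZFS filesystems with sanoid

[Service]
Type=oneshot
ExecStart=/usr/sbin/sanoid --cron

# /etc/systemd/system/sanoid.timer
[Unit]
Description=Run sanoid every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

Then `systemctl daemon-reload && systemctl enable --now sanoid.timer`.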
Create configuration file /etc/sanoid/sanoid.conf
# /etc/sanoid/sanoid.conf
[data/vmpool]
use_template = production
recursive = yes
[template_production]
frequently = 0
hourly = 48
daily = 14
monthly = 0
yearly = 0
autosnap = yes
autoprune = yes
Verify snapshots
zfs list -t snapshot
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
vmpool@autosnap_2024-03-07_11:48:32_daily                       0B      -    96K  -
vmpool@autosnap_2024-03-07_11:48:32_hourly                      0B      -    96K  -
vmpool/subvol-102-disk-0@autosnap_2024-03-07_11:48:32_daily     0B      -  1.14G  -
vmpool/subvol-102-disk-0@autosnap_2024-03-07_11:48:32_hourly    0B      -  1.14G  -
cloud : onedrive
family photo
Microsoft 365 Family subscription : sergii.kurian@gmail.com
