<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[/dev/stderr]]></title><description><![CDATA[Linux, networking, storage, and AWS - tips, how-to, guides, and tutorials]]></description><link>http://www.devstderr.com/</link><image><url>http://www.devstderr.com/favicon.png</url><title>/dev/stderr</title><link>http://www.devstderr.com/</link></image><generator>Ghost 5.2</generator><lastBuildDate>Tue, 21 Apr 2026 04:13:56 GMT</lastBuildDate><atom:link href="http://www.devstderr.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[ZFS Listing Snapshots]]></title><description><![CDATA[<p></p><h2 id="zfs-listing-pools">ZFS Listing Pools </h2><p>ZFS has been amazing - the ability to snapshot, roll back, clone, and create incremental updates is incredibly useful!</p><p>When you&apos;re using ZFS, you may want to look at and sort your snapshots. 
But first, you may need to know what pools you have: <code>zpool list</code>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://www.devstderr.com/content/images/2023/08/image.png" class="kg-image" alt loading="lazy" width="770" height="71" srcset="http://www.devstderr.com/content/images/size/w600/2023/08/image.png 600w, http://www.devstderr.com/content/images/2023/08/image.png 770w" sizes="(min-width: 720px) 720px"><figcaption>zpool</figcaption></figure>]]></description><link>http://www.devstderr.com/zfs-list-snapshots/</link><guid isPermaLink="false">64ca78441eb7d0041fbbe758</guid><dc:creator><![CDATA[root]]></dc:creator><pubDate>Wed, 02 Aug 2023 15:50:23 GMT</pubDate><content:encoded><![CDATA[<p></p><h2 id="zfs-listing-pools">ZFS Listing Pools </h2><p>ZFS has been amazing - the ability to snapshot, roll back, clone, and create incremental updates is incredibly useful!</p><p>When you&apos;re using ZFS, you may want to look at and sort your snapshots. But first, you may need to know what pools you have: <code>zpool list</code>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://www.devstderr.com/content/images/2023/08/image.png" class="kg-image" alt loading="lazy" width="770" height="71" srcset="http://www.devstderr.com/content/images/size/w600/2023/08/image.png 600w, http://www.devstderr.com/content/images/2023/08/image.png 770w" sizes="(min-width: 720px) 720px"><figcaption>zpool list</figcaption></figure><h2 id="zfs-listing-datasets">ZFS Listing Datasets</h2><p>Once you know the pool, you can see its <strong>datasets</strong> - that&apos;s just <code>zfs list rpool</code> (or bpool, but if you&apos;re using ZFS on Ubuntu 22.04 you probably are more interested in rpool).</p><h2 id="zfs-listing-snapshots">ZFS Listing Snapshots</h2><p>ZFS allows you to select what entity types you&apos;d like returned with the <code>-t</code> flag. 
By passing <code>-t snapshot</code> you list ZFS snapshots.</p><p>For example, <code>zfs list -t snapshot -r rpool</code></p><p>Further, you can sort the snapshots by, say, name with <code>zfs list -t snapshot -s name -r rpool</code></p><p>If you have tens of thousands of snapshots (trust me, it happens) you might want to limit the fields that are returned - with that many snapshots it can take minutes for the OS to calculate the available and used space of every snapshot.</p><p>So - you can speed up listing the snapshots by only returning the name of the snapshot (with the <code>-o name</code> option).</p><p><code>zfs list -t snapshot -o name -s name -r rpool</code></p><h2 id="find-the-dataset-with-the-most-snapshots">Find the dataset with the most snapshots</h2><p>If you want to find datasets with the most snapshots, see my post here:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="http://www.devstderr.com/zfs-finding-datasets-with-the-most-snapshots"><div class="kg-bookmark-content"><div class="kg-bookmark-title">ZFS Finding Datasets with the most snapshots</div><div class="kg-bookmark-description">Following up on Removing old ZFS snapshots, I had way too many snapshots... 
These commands will find all snapshots within the pool tank and count them by dataset.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="http://www.devstderr.com/content/images/size/w256h256/2022/02/2022-02-26-1645921864_screenshot_287x287-3.png" alt><span class="kg-bookmark-author">/dev/stderr</span><span class="kg-bookmark-publisher">root</span></div></div><div class="kg-bookmark-thumbnail"><img src="http://www.devstderr.com/content/images/2022/06/2022-06-07-1654603245_screenshot_1142x654.jpg" alt></div></a></figure>]]></content:encoded></item><item><title><![CDATA[ARCH VM resize root partition]]></title><description><![CDATA[<p>First of all - <strong>back up your data!!!</strong> While we hope not to destroy your data, a single typo could nuke the VM. Or if there&apos;s an error in the instructions, etc. While this worked great for me several times, I can provide no guarantees that this will</p>]]></description><link>http://www.devstderr.com/arch-vm-resize/</link><guid isPermaLink="false">63d414a21eb7d0041fbbe5ee</guid><dc:creator><![CDATA[root]]></dc:creator><pubDate>Tue, 07 Feb 2023 13:47:00 GMT</pubDate><content:encoded><![CDATA[<p>First of all - <strong>back up your data!!!</strong> While we hope not to destroy your data, a single typo could nuke the VM. Or if there&apos;s an error in the instructions, etc. 
While this worked great for me several times, I can provide no guarantees that this will work nor that it will preserve your data - use at your own risk.</p><p>The three main steps are:</p><ol><li>Resize the disk in the hypervisor</li><li>Update the partition table to take up the full disk</li><li>Increase the size of the filesystem to take up the full partition</li></ol><p></p><h2 id="resize-the-disk-in-the-hypervisor">Resize the disk in the hypervisor</h2><p>First, you need to increase the size of the disk in whatever virtualization software you&apos;re using.</p><p>In Proxmox, select the VM, go to the Hardware tab, select the drive and hit &quot;Disk Action.&quot; Then hit &quot;Resize&quot; and enter the number of GB you&apos;d like to add to the disk. Note: you can only increase the size, not decrease it, so don&apos;t go too crazy! I just need a little room so I&apos;m adding 4GB.</p><figure class="kg-card kg-image-card"><img src="http://www.devstderr.com/content/images/2023/02/image.png" class="kg-image" alt loading="lazy" width="526" height="145"></figure><h2 id="update-the-partition-table">Update the partition table</h2><figure class="kg-card kg-code-card"><pre><code>lsblk
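# lsblk shows the block-device tree; df -h (below) shows mounted filesystems -
# together they tell you which disk and partition to grow (e.g. /dev/sda2 for /)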
df -h</code></pre><figcaption>Find the disk and partition you need to resize (probably your root partition)</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>fdisk /dev/sda</code></pre><figcaption>open the disk with fdisk</figcaption></figure><p>You may see a warning like this: </p><p><em>This disk is currently in use - repartitioning is probably a bad idea.<br>It&apos;s recommended to umount all file systems, and swapoff all swap<br>partitions on this disk.</em></p><p>Again, continue at your own risk, and only if your data is backed up.</p><p>&gt;# Command (m for help): <code>p</code> <sub># p prints the partition table - again, make SURE you know which partition you want to resize</sub></p><blockquote><em>Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors<br>Disk model: QEMU HARDDISK<br>Units: sectors of 1 * 512 = 512 bytes<br>Sector size (logical/physical): 512 bytes / 512 bytes<br>I/O size (minimum/optimal): 512 bytes / 512 bytes<br>Disklabel type: gpt<br>Disk identifier: 7BB1AE60-B6B0-6743-6C49-1ACCAD9BF33C</em></blockquote><blockquote><em>Device &#xA0; &#xA0; &#xA0; Start &#xA0; &#xA0; &#xA0;End &#xA0;Sectors &#xA0;Size Type<br>/dev/sda1 &#xA0; &#xA0; 2048 &#xA0;1050623 &#xA0;1048576 &#xA0;512M BIOS boot<br>/dev/sda2 &#xA0;1050624 12580863 11530240 &#xA0;5.5G Linux filesystem</em></blockquote><p>&gt;# Command (m for help): <code>d</code> <sub># select d for delete</sub></p><p>&gt;# Partition number (1,2, default 2): <code>2</code> <sub># select 2 to delete partition 2</sub></p><p>&gt;# Partition 2 has been deleted.</p><p>&gt;# Command (m for help): <code>n</code> <sub># select n to create a new partition</sub></p><p>&gt;# Partition number (2-128, default 2): <code>2</code> <sub># 2 to number the new partition 2</sub></p><p>&gt;# First sector (1050624-20971486, default 1050624):<br>&gt;# Last sector, +/-sectors or +/-size{K,M,G,T,P} (1050624-20971486, default 20969471): ENTER <sub>just hit the enter key to use the 
default</sub></p><p>&gt;# Created a new partition 2 of type &apos;Linux filesystem&apos; and of size 9.5 GiB.<br>&gt;# Partition #2 contains a ext4 signature.</p><p>&gt;# Do you want to remove the signature? [Y]es/[N]o: <code>n</code> <sub>n to leave the ext4 signature as is - we don&apos;t want to delete anything, just make it larger!</sub></p><p>&gt;# Command (m for help): <code>p</code> <sub># p prints the partition table again - make sure the starting location of the second partition is where it was when you first ran <code>p</code> and that the first partition is unchanged</sub></p><blockquote><em>Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors<br>Disk model: QEMU HARDDISK<br>Units: sectors of 1 * 512 = 512 bytes<br>Sector size (logical/physical): 512 bytes / 512 bytes<br>I/O size (minimum/optimal): 512 bytes / 512 bytes<br>Disklabel type: gpt<br>Disk identifier: 7BB1AE60-B6B0-6743-6C49-1ACCAD9BF33C</em></blockquote><blockquote><em>Device &#xA0; &#xA0; &#xA0; Start &#xA0; &#xA0; &#xA0;End &#xA0;Sectors &#xA0;Size Type<br>/dev/sda1 &#xA0; &#xA0; 2048 &#xA0;1050623 &#xA0;1048576 &#xA0;512M BIOS boot<br>/dev/sda2 &#xA0;1050624 20969471 19918848 &#xA0;9.5G Linux filesystem</em></blockquote><p><sub>And only when you&apos;re SURE things line up:</sub></p><p>&gt;# Command (m for help): <code>w</code> <sub># select w to write the new partition table</sub> </p><figure class="kg-card kg-code-card"><pre><code>lsblk</code></pre><figcaption>look at the partitions again to make sure the size has increased</figcaption></figure><h2 id="increase-the-size-of-the-filesystem-to-take-up-the-full-partition">Increase the size of the filesystem to take up the full partition</h2><figure class="kg-card kg-code-card"><pre><code>resize2fs /dev/sda2</code></pre><figcaption>resize the filesystem to take up the full partition</figcaption></figure><figure class="kg-card 
kg-code-card"><pre><code>df -h</code></pre><figcaption>you should now see lots of space remaining (if you don&apos;t, you may need to reboot the VM)</figcaption></figure><p></p><p>A lot of this information came from AskUbuntu, but was modified slightly for Arch Linux. Please see below for the Ubuntu directions:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://askubuntu.com/questions/24027/how-can-i-resize-an-ext-root-partition-at-runtime"><div class="kg-bookmark-content"><div class="kg-bookmark-title">How can I resize an ext root partition at runtime?</div><div class="kg-bookmark-description">How can I increase the size of the root partition of a system at runtime? I have a partition that is not allocated after the root partition (which is also ext4), how can I add that unallocated sp...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://cdn.sstatic.net/Sites/askubuntu/Img/apple-touch-icon.png?v=e16e1315edd6" alt><span class="kg-bookmark-author">Ask Ubuntu</span><span class="kg-bookmark-publisher">BonboBingo</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://cdn.sstatic.net/Sites/askubuntu/Img/apple-touch-icon@2.png?v=c492c9229955" alt></div></a></figure><p></p>]]></content:encoded></item><item><title><![CDATA[Truenas Keybindings]]></title><description><![CDATA[<p>Coming from Ubuntu, I&apos;m used to navigating a line in the terminal with <code>CTRL+LEFT</code> and <code>CTRL+RIGHT</code> - but instead of the cursor moving left or right by one word at a time I see <code>;5D;5D;3D;3D</code> appear where my cursor</p>]]></description><link>http://www.devstderr.com/truenas-keybindings/</link><guid isPermaLink="false">63e243fc1eb7d0041fbbe602</guid><dc:creator><![CDATA[root]]></dc:creator><pubDate>Tue, 07 Feb 2023 13:02:55 GMT</pubDate><content:encoded><![CDATA[<p>Coming from Ubuntu, I&apos;m used to navigating a line in the terminal with 
<code>CTRL+LEFT</code> and <code>CTRL+RIGHT</code> - but instead of the cursor moving left or right by one word at a time I see <code>;5D;5D;3D;3D</code> appear where my cursor is! And when I try to use the delete key, I just see <code>~</code> appear! This is clearly not what I wanted to type! <code>/mnt/nas;3D;3D/home</code></p><p>Luckily, we can activate the keybindings on FreeBSD/Truenas that we&apos;re used to on other platforms.</p><p>Just add the keybindings to <code>~/.inputrc</code> and then source it in <code>~/.zshrc</code>! </p><figure class="kg-card kg-code-card"><pre><code>setopt interactivecomments;
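# note: despite living in ~/.inputrc, these are zsh bindkey commands (not
# readline syntax) - they only take effect because ~/.zshrc sources this file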

case &quot;${TERM}&quot; in
  cons25*|linux) # plain BSD/Linux console
    bindkey &apos;\e[H&apos;    beginning-of-line   # home
    bindkey &apos;\e[F&apos;    end-of-line         # end
    bindkey &apos;\e[5~&apos;   delete-char         # delete
    bindkey &apos;[D&apos;      emacs-backward-word # esc left
    bindkey &apos;[C&apos;      emacs-forward-word  # esc right
    ;;
  *rxvt*) # rxvt derivatives
    bindkey &apos;\e[3~&apos;   delete-char         # delete
    bindkey &apos;\eOc&apos;    forward-word        # ctrl right
    bindkey &apos;\eOd&apos;    backward-word       # ctrl left
    # workaround for screen + urxvt
    bindkey &apos;\e[7~&apos;   beginning-of-line   # home
    bindkey &apos;\e[8~&apos;   end-of-line         # end
    bindkey &apos;^[[1~&apos;   beginning-of-line   # home
    bindkey &apos;^[[4~&apos;   end-of-line         # end
    ;;
  *xterm*) # xterm derivatives
    bindkey &apos;\e[H&apos;    beginning-of-line   # home
    bindkey &apos;\e[F&apos;    end-of-line         # end
    bindkey &apos;\e[3~&apos;   delete-char         # delete
    bindkey &apos;\e[3;5~&apos; delete-word         # delete-word
    bindkey &apos;\e[1;5C&apos; forward-word        # ctrl right
    bindkey &apos;\e[1;5D&apos; backward-word       # ctrl left
    # workaround for screen + xterm
    bindkey &apos;\e[1~&apos;   beginning-of-line   # home
    bindkey &apos;\e[4~&apos;   end-of-line         # end
    ;;
  screen)
    bindkey &apos;^[[1~&apos;   beginning-of-line   # home
    bindkey &apos;^[[4~&apos;   end-of-line         # end
    bindkey &apos;\e[3~&apos;   delete-char         # delete
    bindkey &apos;\eOc&apos;    forward-word        # ctrl right
    bindkey &apos;\eOd&apos;    backward-word       # ctrl left
    bindkey &apos;^[[1;5C&apos; forward-word        # ctrl right
    bindkey &apos;^[[1;5D&apos; backward-word       # ctrl left
    ;;
esac</code></pre><figcaption>nano ~/.inputrc</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>echo &apos;source ~/.inputrc&apos; &gt;&gt; ~/.zshrc</code></pre><figcaption>load the keybindings when you log in</figcaption></figure><p>Or - if you want to activate them immediately </p><figure class="kg-card kg-code-card"><pre><code>source ~/.inputrc</code></pre><figcaption>activate keybindings immediately</figcaption></figure><p></p><p>Kudos to vermaden at Server Fault for creating the list! Please upvote him if you find his answer helpful (I only found it and wanted to make it easier to find for those using Truenas)!</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://serverfault.com/questions/386871/getting-5d-when-hitting-ctrl-arrow-key-in-a-terminal-on-freebsd"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Getting ;5D when hitting ctrl + arrow key in a Terminal on FreeBSD</div><div class="kg-bookmark-description">On centos I can skip a word by hitting ctrl + arrow (left or right) in a terminal. 
When I ssh into a FreeBSD box and I try the same pattern I get: $ tail -f 20120412.log;5D;5D;5D (each try = ;5D...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://cdn.sstatic.net/Sites/serverfault/Img/apple-touch-icon.png?v=6c3100d858bb" alt><span class="kg-bookmark-author">Server Fault</span><span class="kg-bookmark-publisher">jdorfman</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://cdn.sstatic.net/Sites/serverfault/Img/apple-touch-icon@2.png?v=9b1f48ae296b" alt></div></a></figure><p></p>]]></content:encoded></item><item><title><![CDATA[Mac VM disk resize]]></title><description><![CDATA[<p></p><p>I have a MacOS VM running on Proxmox installed by using the instructions from Nicholas Sherlock</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.nicksherlock.com/2021/10/installing-macos-12-monterey-on-proxmox-7/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Installing macOS 12 &#x201C;Monterey&#x201D; on Proxmox 7 &#x2013; Nicholas Sherlock</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.nicksherlock.com/favicon.ico" alt><span class="kg-bookmark-author">Nicholas Sherlock Menu and widgets</span><span class="kg-bookmark-publisher">Nicholas Sherlock</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.nicksherlock.com/wp-content/uploads/2021/10/Screen-Shot-2021-10-26-at-11.02.06-AM.png" alt></div></a></figure><p></p><p>I ran out of disk space for the VM and needed to add more. 
</p><p></p><ol><li>Shutdown</li></ol>]]></description><link>http://www.devstderr.com/mac-vm-disk-resize/</link><guid isPermaLink="false">638638441eb7d0041fbbe4fd</guid><category><![CDATA[proxmox]]></category><category><![CDATA[MacOS]]></category><category><![CDATA[vm]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Fri, 02 Dec 2022 00:59:52 GMT</pubDate><content:encoded><![CDATA[<p></p><p>I have a MacOS VM running on Proxmox installed using the instructions from Nicholas Sherlock</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.nicksherlock.com/2021/10/installing-macos-12-monterey-on-proxmox-7/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Installing macOS 12 &#x201C;Monterey&#x201D; on Proxmox 7 &#x2013; Nicholas Sherlock</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.nicksherlock.com/favicon.ico" alt><span class="kg-bookmark-author">Nicholas Sherlock Menu and widgets</span><span class="kg-bookmark-publisher">Nicholas Sherlock</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.nicksherlock.com/wp-content/uploads/2021/10/Screen-Shot-2021-10-26-at-11.02.06-AM.png" alt></div></a></figure><p></p><p>I ran out of disk space for the VM and needed to add more. </p><p></p><ol><li>Shut down the MacOS VM</li><li>Increase disk size in the Proxmox GUI</li><li>Start the MacOS instance and connect to it</li><li>Open Disk Utility</li><li>Run &quot;First Aid&quot; on the disk</li><li>Open the partitioning tool</li><li>Type in the disk size you want (dragging didn&apos;t seem to work correctly)<br>The &quot;unused disk&quot; fraction (grey) may be WRONG - but that&apos;s okay. As the resize runs, MacOS makes sure the new disk size will work.</li><li>Boom! 
You should now be all set with your larger disk!</li></ol><p></p>]]></content:encoded></item><item><title><![CDATA[LXC Won't Start Debugging - (NFS disconnect)]]></title><description><![CDATA[<p></p><p>On Proxmox I recently updated a Debian container - and when I rebooted it, it wouldn&apos;t start!</p><p>In the browser I could see an error but no details.</p><p>So I tried launching it from the command line with </p><pre><code>pct start 103</code></pre><p>Only to receive the error</p><pre><code>Failed to run lxc.</code></pre>]]></description><link>http://www.devstderr.com/lxc-wont-start-debugging/</link><guid isPermaLink="false">637630141eb7d0041fbbe3f8</guid><category><![CDATA[proxmox]]></category><category><![CDATA[nfs]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Fri, 02 Dec 2022 00:56:18 GMT</pubDate><content:encoded><![CDATA[<p></p><p>On Proxmox I recently updated a Debian container - and when I rebooted it, it wouldn&apos;t start!</p><p>In the browser I could see an error but no details.</p><p>So I tried launching it from the command line with </p><pre><code>pct start 103</code></pre><p>Only to receive the error</p><pre><code>Failed to run lxc.hook.pre-start for container &quot;103&quot;</code></pre><p>But when I ran the verbose (debug) version of pct start</p><figure class="kg-card kg-code-card"><pre><code>pct start 103 --debug 1</code></pre><figcaption>Running in debug mode helped me figure out the issue!</figcaption></figure><p>I could see that a mount point didn&apos;t exist!</p><p></p><pre><code>DEBUG    conf - ../src/lxc/conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 103 lxc pre-start produced output: directory &apos;/mnt/pve/MyMountPoint&apos; does not exist</code></pre>]]></content:encoded></item><item><title><![CDATA[Proxmox Arch VM install]]></title><description><![CDATA[<p>For security, I wanted to expose a VM on Proxmox to the web rather than a container. 
Ubuntu is rather heavy, so I settled on an Arch Linux VM. </p><p></p><h2 id="easiest-way-to-install-archa-prebuilt-qcow2-file">Easiest way to install Arch - a prebuilt qcow2 file!</h2><p>This takes just a couple of minutes and requires minimal thought</p>]]></description><link>http://www.devstderr.com/arch-vm-install/</link><guid isPermaLink="false">6382ae131eb7d0041fbbe41b</guid><category><![CDATA[arch]]></category><category><![CDATA[proxmox]]></category><category><![CDATA[vm]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Fri, 02 Dec 2022 00:31:41 GMT</pubDate><content:encoded><![CDATA[<p>For security, I wanted to expose a VM on Proxmox to the web rather than a container. Ubuntu is rather heavy, so I settled on an Arch Linux VM. </p><p></p><h2 id="easiest-way-to-install-archa-prebuilt-qcow2-file">Easiest way to install Arch - a prebuilt qcow2 file!</h2><p>This takes just a couple of minutes and requires minimal thought and effort... but the hard drive will be 40GB. If that&apos;s a dealbreaker, see below on how to install your own instance of Arch.</p><ol><li>Download the basic qcow2 image from <a href="https://gitlab.archlinux.org/archlinux/arch-boxes/-/jobs/106936/artifacts/browse/output">https://gitlab.archlinux.org/archlinux/arch-boxes/-/jobs/106936/artifacts/browse/output</a> to your Proxmox server <br>e.g., <code>wget <a href="https://gitlab.archlinux.org/archlinux/arch-boxes/-/jobs/106936/artifacts/raw/output/Arch-Linux-x86_64-basic-20221201.106936.qcow2?inline=false">https://gitlab.archlinux.org/archlinux/arch-boxes/-/jobs/106936/artifacts/raw/output/Arch-Linux-x86_64-basic-20221201.106936.qcow2?inline=false</a></code></li><li>In the Proxmox GUI, click <code>Create: VM</code></li><li>On the OS Selection page choose &quot;Do not use any media&quot;</li><li>Continue the VM creation flow</li><li>Once complete, go to the Hardware tab and select the hard drive - remove it!</li><li>From the terminal run (where 134 is the appropriate instance id and 
local-zfs is the location where you want the virtual drive stored).<br><code>qm importdisk 134 Arch-Linux-x86_64-basic-20221201.106936.qcow2 local-zfs</code></li><li>Attach the newly imported disk (it appears as an unused disk on the Hardware tab) and enable it under Options &gt; Boot Order</li><li>Start the VM and log in with the username/password arch/arch</li></ol><p></p><h3 id="harder-but-more-flexible-install-your-own-instance-of-arch">Harder but more flexible: Install your own instance of Arch</h3><p>Download the Arch install ISO</p><ol><li><code>Create: VM</code></li><li>Select a disk size of whatever you want (at the very least <code>2</code> GiB - I&apos;d suggest something like <code>6</code> GiB so you have some flexibility if you don&apos;t expect to need a lot of space).</li><li>Select at least <code>750</code> MB ram (while 512M should work, I received an error &quot;Waiting 30 seconds for device....device did not show up...&quot;).</li><li>Select the Arch iso as an attached disk</li><li>Once the bootable iso boots run:<br><code>cfdisk /dev/sda</code><br>then run the following commands (note: we&apos;re creating a bootable DOS/MBR partition):</li></ol><figure class="kg-card kg-code-card"><pre><code>
&gt; dos
&gt; hit &quot;Enter&quot; on free space
&gt; hit new &gt; (continue using the remainder of the disk) 
&gt; primary
&gt; left arrow to Bootable
&gt; Enter on Bootable (should now see an asterisk in the table under &quot;Boot&quot;)
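# nothing has been written to the disk yet - cfdisk only commits changes at the write step below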
&gt; write &gt; yes &gt; quit</code></pre><figcaption>Partitioning the disk</figcaption></figure><p>6. To confirm, you can run <code>lsblk</code> to see your partitions</p><figure class="kg-card kg-code-card"><pre><code>mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt</code></pre><figcaption>Format and mount the new partition</figcaption></figure><p>7. Update package lists</p><figure class="kg-card kg-code-card"><pre><code>pacman -Syy</code></pre><figcaption>Update package lists</figcaption></figure><p>8. Install needed packages to your new virtual hard drive</p><figure class="kg-card kg-code-card"><pre><code>pacstrap /mnt base linux linux-firmware sudo nano openssh networkmanager grub os-prober mtools</code></pre><figcaption>Install Arch on the new partition</figcaption></figure><figure class="kg-card kg-code-card"><pre><code># OPTIONAL: if you run into an error about invalid keys then make sure you have up-to-date keys
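# (the keyring on an older install ISO can predate the keys used to sign current packages)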
pacman -Sy archlinux-keyring

pacstrap /mnt base linux linux-firmware sudo nano openssh networkmanager grub os-prober mtools</code></pre><figcaption>OPTIONAL (Only if you run into an issue with keys)</figcaption></figure><p>9. Create the fstab</p><figure class="kg-card kg-code-card"><pre><code>genfstab -U /mnt &gt;&gt; /mnt/etc/fstab</code></pre><figcaption>Create fstab</figcaption></figure><p>10. Change root to /mnt so you can configure your new system</p><figure class="kg-card kg-code-card"><pre><code>arch-chroot /mnt</code></pre><figcaption>Treat your new partition as root for install</figcaption></figure><p>11. Configure and install grub (again, this is WITHIN the chroot session)</p><figure class="kg-card kg-code-card"><pre><code>grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg</code></pre><figcaption>Configure and install grub</figcaption></figure><p>12. Set the root password and create your own user (if you want)</p><figure class="kg-card kg-code-card"><pre><code># set your own root password
passwd

# create a new user and set its password
useradd -m -G wheel myuser
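# note: for myuser to actually use sudo, enable the wheel group in sudoers -
# run EDITOR=nano visudo and uncomment the &quot;%wheel ALL=(ALL:ALL) ALL&quot; line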
passwd myuser</code></pre><figcaption>Set root password and create a new user</figcaption></figure><p>13. Enable NetworkManager so you have internet access once you reboot</p><figure class="kg-card kg-code-card"><pre><code>systemctl enable NetworkManager</code></pre><figcaption>Enable NetworkManager so networking comes up on boot</figcaption></figure><p>14. Exit chroot and shutdown</p><figure class="kg-card kg-code-card"><pre><code>exit

shutdown</code></pre><figcaption>exit and then reboot!&#xA0;</figcaption></figure><p>15. Remove the Arch install CD (iso) and start up the new VM!</p><p>After install you might be able to lower memory to even 300MB!</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Recover from OPNsense GUI lockout]]></title><description><![CDATA[<p>Somehow a gateway change locked me out of my OPNsense GUI. I still have the Anti-Lockout Rule enabled in the firewall so this shouldn&apos;t have been possible, but alas...</p><p>While I didn&apos;t have GUI access, I could still SSH. It turns out, OPNsense&apos;s main</p>]]></description><link>http://www.devstderr.com/recover-from-opnsense-gui-lockout/</link><guid isPermaLink="false">63418c5ff115d203f6fbb5f8</guid><category><![CDATA[OPNsense]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Thu, 01 Dec 2022 21:02:41 GMT</pubDate><content:encoded><![CDATA[<p>Somehow a gateway change locked me out of my OPNsense GUI. I still have the Anti-Lockout Rule enabled in the firewall so this shouldn&apos;t have been possible, but alas...</p><p>While I didn&apos;t have GUI access, I could still SSH. It turns out, OPNsense&apos;s main configuration file is <em>/conf/config.xml</em>. </p><p>So, first I backed up the configuration: <code>cp /conf/config.xml /conf/config.xml.bak</code>.</p><p>Then I opened it in vi to find my recent changes (I had enabled a route and interface named OpenVPN): <code>vi /conf/config.xml</code>.</p><p>After reversing my edits I saved the file (<code>:wq</code>) and ran <code>reboot</code> - I was back in!</p>]]></content:encoded></item><item><title><![CDATA[OpenVPN client in LXC]]></title><description><![CDATA[<p>I followed the directions in <a href="https://pve.proxmox.com/wiki/OpenVPN_in_LXC">https://pve.proxmox.com/wiki/OpenVPN_in_LXC</a> but on any network load OpenVPN causes the connection to cut out... 
</p><p></p><p>I had much better luck running a pfSense VM with OpenVPN installed on it and setting that pfSense instance as the gateway for my VM.</p>]]></description><link>http://www.devstderr.com/openvpn-in-lxc/</link><guid isPermaLink="false">62d1f85ff115d203f6fbb52a</guid><category><![CDATA[proxmox]]></category><category><![CDATA[openvpn]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Fri, 15 Jul 2022 23:32:33 GMT</pubDate><content:encoded><![CDATA[<p>I followed the directions in <a href="https://pve.proxmox.com/wiki/OpenVPN_in_LXC">https://pve.proxmox.com/wiki/OpenVPN_in_LXC</a> but on any network load OpenVPN causes the connection to cut out... </p><p></p><p>I had much better luck running a pfSense VM with OpenVPN installed on it and setting that pfSense instance as the gateway for my VM. &#xA0;Pretty extreme, but it was the best I could get... Please share links or suggestions if you have any!</p>]]></content:encoded></item><item><title><![CDATA[Expand rpool Partition in Proxmox]]></title><description><![CDATA[<p></p><p>My main Proxmox hard drive was filling up recently...</p><figure class="kg-card kg-code-card"><pre><code>
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH
rpool   451G   396G    55G        -         -    32%    89%  1.00x    ONLINE</code></pre><figcaption><code>zpool list</code></figcaption></figure><p>But I realized the partition was only half the size of the disk! (Proxmox was originally installed</p>]]></description><link>http://www.devstderr.com/expand-rpool-partition-in-proxmox/</link><guid isPermaLink="false">62d1f296f115d203f6fbb47b</guid><category><![CDATA[proxmox]]></category><category><![CDATA[zfs]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Fri, 15 Jul 2022 23:28:36 GMT</pubDate><content:encoded><![CDATA[<p></p><p>My main Proxmox hard drive was filling up recently...</p><figure class="kg-card kg-code-card"><pre><code>
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH
rpool   451G   396G    55G        -         -    32%    89%  1.00x    ONLINE</code></pre><figcaption><code>zpool list</code></figcaption></figure><p>But I realized the partition was only half the size of the disk! (Proxmox was originally installed on a smaller disk and copied to its current disk with dd.)</p><p>So I pulled up the drive partitions:</p><figure class="kg-card kg-code-card"><pre><code>Disk /dev/sdh: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt

Device       Start       End   Sectors   Size Type
/dev/sdh1       34      2047      2014  1007K BIOS boot
/dev/sdh2     2048   1050623   1048576   512M EFI System
/dev/sdh3  1050624 937703054 936652431 446.6G Solaris /usr &amp; Apple ZFS
</code></pre><figcaption>fdisk -l /dev/sdh</figcaption></figure><p>Disk: <code>/dev/sdh</code></p><p>Partition I wanted to expand: <code>/dev/sdh3</code></p><h2 id="for-spoiler-on-the-one-line-resize-scroll-to-the-end">[For spoiler on the one line resize scroll to the end.]</h2><p>I started to go through the fdisk flow of resizing the partition but panicked once I realized I had to delete the partition, rewrite it, and then saw messages like <code>Created a new partition 3 of type &apos;Linux Filesystem&apos; and of size 884.4 GiB.</code> and <code>Partition #3 contains a zfs_member signature</code>. I wasn&apos;t sure if the partition was right or not!</p><figure class="kg-card kg-code-card"><pre><code>Device       Start        End    Sectors   Size Type
/dev/sdh1       34       2047       2014  1007K BIOS boot
/dev/sdh2     2048    1050623    1048576   512M EFI System
/dev/sdh3  1050624 1855717376 1854666753 884.4G Linux filesystem</code></pre><figcaption>when I displayed my in-memory partition table with <code>p</code>, the type had changed from &quot;Solaris /usr &amp; Apple ZFS&quot; to &quot;Linux filesystem&quot;!</figcaption></figure><p>I quickly quit out of fdisk without rewriting the partition. I had remembered something easier from Ubuntu and finally found it! resizepart! </p><h1 id="simple-1-line-partition-resize-to-100">Simple 1 line partition resize to 100%:</h1><p>I simply ran <code>parted /dev/sdh resizepart 3 100%</code> and the partition grew to 100% like magic! (Obviously use your appropriate disk letter/partition number.)</p><p>I made sure the pool would expand:</p><figure class="kg-card kg-code-card"><pre><code>zpool set autoexpand=on rpool
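# autoexpand=on lets the pool grow into new device space automatically;
# 'zpool online -e' below asks ZFS to expand the vdev immediately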
zpool online -e rpool /dev/sdh3</code></pre><figcaption>make sure the pool expands</figcaption></figure><p>And confirmed it did by rerunning zpool list!</p><figure class="kg-card kg-code-card"><pre><code>NAME  SIZE  ALLOC FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH 
rpool 900G   396G 504G        -         -    32%    44%  1.00x    ONLINE 

</code></pre><figcaption><code>zpool list</code></figcaption></figure><p>Note: the Proxmox GUI didn&apos;t recognize the resize until reboot, but as long as zpool did I was confident things were good!</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[ZFS Finding Datasets with the most snapshots]]></title><description><![CDATA[Following up on Removing old ZFS snapshots, I had way too many snapshots... These commands will find all snapshots within the pool tank and count them by dataset.]]></description><link>http://www.devstderr.com/zfs-finding-datasets-with-the-most-snapshots/</link><guid isPermaLink="false">62b89c9cb22f500417ff15ca</guid><category><![CDATA[zfs]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Mon, 27 Jun 2022 08:46:54 GMT</pubDate><content:encoded><![CDATA[<p>Following up on <a href="http://www.devstderr.com/removing-old-zfs-snapshots/">Removing old ZFS snapshots</a>, I had way too many snapshots and while I knew some datasets that had too many, I wasn&apos;t clear how to find datasets with way too many snapshots that I had to delete.</p><p>Simply running <code>zfs list -t snapshot -o name -s name -r nas</code> resulted in tens of thousands of results I couldn&apos;t scroll through. And I wanted to be cautious where I ran <em>zfs destroy</em> on snapshots!</p><p>So I decided to create a series of commands that let me count the number of snapshots each dataset has and sort them!</p><p>These commands will find all snapshots within the pool <code>tank</code> and count them by dataset:</p><p><code>zfs list -t snapshot -o name -s name -r tank | sed &apos;s/@.*//&apos; | sort | uniq -c | sort</code></p><p></p><p>While those are sorted in ascending order, that works nicely for me in the terminal. 
If you prefer just the top datasets in descending order, just reverse the final sort and pipe to head!</p><p><code>zfs list -t snapshot -o name -s name -r tank | sed &apos;s/@.*//&apos; | sort | uniq -c | sort -r | head</code> </p><figure class="kg-card kg-code-card"><pre><code> 23161 tank/dataset1
 11353 tank/dataset2 
 1126 tank/backup/ubuntu_g1f5ds/var/lib/230948234aef887o90897f987e987e
 1073 tank/dataset4
 953 tank/dataset5</code></pre><figcaption><em><code>zfs list -t snapshot -o name -s name -r tank | sed &apos;s/@.*//&apos; | sort | uniq -c | sort -r | head</code>&#xA0;</em></figcaption></figure><p></p><p><em>Warning, this should not be piped to <code>zfs destroy</code>, <code>xargs</code>, or anything like that. Here <code>sed</code> removes the snapshot name leaving just the dataset name.</em></p>]]></content:encoded></item><item><title><![CDATA[Wireguard returns "RTNETLINK answers: No such device" error -> low MTU]]></title><description><![CDATA[<p></p><p>I use a Wireguard VPN to tunnel my traffic when on public wifi - but I couldn&apos;t use Wireguard from a recent network! &#xA0;This specific network was causing Wireguard to fail with the error message <strong><em>&quot;RTNETLINK answers: No such device.&quot;</em></strong> &#xA0;</p><p>I hadn&apos;t</p>]]></description><link>http://www.devstderr.com/wireguard-rtnetlink/</link><guid isPermaLink="false">62ab105112fbf208f09a81ab</guid><category><![CDATA[wireguard]]></category><category><![CDATA[vpn]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Thu, 16 Jun 2022 11:26:05 GMT</pubDate><content:encoded><![CDATA[<p></p><p>I use a Wireguard VPN to tunnel my traffic when on public wifi - but I couldn&apos;t use Wireguard from a recent network! 
&#xA0;This specific network was causing Wireguard to fail with the error message <strong><em>&quot;RTNETLINK answers: No such device.&quot;</em></strong> &#xA0;</p><p>I hadn&apos;t changed any configurations, so I was very confused why a device was missing!</p><p>I found the answer on the <a href="https://lore.kernel.org/wireguard/20190321033638.1ff82682@natsu/t/">WireGuard mailing list</a>: IPv6 requires an MTU of at least 1280 - and when I ran <code>ip a</code> I could see the public wifi only allowed an MTU up to 1270!</p><h3 id="the-fix-for-rtnetlink-answers-no-such-device">The fix for &quot;RTNETLINK answers: No such device&quot;:</h3><p>Disable IPv6 altogether (to prevent a leak if you do somehow switch back to IPv6)</p><pre><code>sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1; sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1; sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1</code></pre><p>And update the AllowedIPs Wireguard config to exclude IPv6. </p><figure class="kg-card kg-code-card"><pre><code>AllowedIPs = 0.0.0.0/0</code></pre><figcaption>After removing <code>, ::/0</code></figcaption></figure><p>And you should be able to connect!</p><h3 id="reminder">Reminder</h3><p>If you need to re-enable IPv6 remember to revert the changes to your config file! &#xA0;The disabling of IPv6 DOES NOT persist between reboots. 
(You can disable IPv6 permanently in /etc/sysctl.conf if needed.)</p>]]></content:encoded></item><item><title><![CDATA[Removing old ZFS snapshots]]></title><description><![CDATA[<p></p><p>I back up multiple computers to my TrueNAS instance hourly, but these snapshots hang around without an easy way to prune them (computers I back up include <a href="http://www.devstderr.com/ubuntu-syncoid/">Ubuntu</a> and <a href="http://www.devstderr.com/backup-proxmox-syncoid/">Proxmox</a>).</p><p></p><p>But over a couple years one of my datasets seemed to grow snapshots faster than bunny rabbits reproduce to over</p>]]></description><link>http://www.devstderr.com/removing-old-zfs-snapshots/</link><guid isPermaLink="false">62a5096bfebcd6594f6a70c3</guid><category><![CDATA[zfs]]></category><category><![CDATA[freenas]]></category><category><![CDATA[truenas]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Sat, 11 Jun 2022 22:12:41 GMT</pubDate><content:encoded><![CDATA[<p></p><p>I back up multiple computers to my TrueNAS instance hourly, but these snapshots hang around without an easy way to prune them (computers I back up include <a href="http://www.devstderr.com/ubuntu-syncoid/">Ubuntu</a> and <a href="http://www.devstderr.com/backup-proxmox-syncoid/">Proxmox</a>).</p><p></p><p>But over a couple years one of my datasets seemed to grow snapshots faster than bunny rabbits reproduce to over 400k snapshots!</p><p><code>zfs list -t snapshot -o name -s name -r tank/UBUNTU_DATASET_NAME | wc -l</code></p><p>Not only that, but the above command took several minutes!!! (Taking minutes to list the snapshots was actually how I knew something was wrong.)</p><p>Sanoid and syncoid are great, but I was sending too many snapshots over without cleaning them up!</p><h3 id="how-to-clean-up-snapshots">How to clean up snapshots</h3><p>If you do this, be VERY careful that you know what you&apos;re deleting. 
Step 2 is included to make sure we actually look at what we&apos;re deleting - <strong>don&apos;t skip it!</strong></p><p>I am not responsible for any commands you run. These commands are not suggestions, just a list of what I ran.</p><p>For the below, I&apos;m assuming my pool is named <code>tank</code> and the dataset I want to clean up is called <code>UBUNTU_DATASET_NAME</code>.</p><ol><li>I needed to run these as root, so I switched to root</li></ol><p><code>sudo su</code></p><p>2. Find the snapshots I need to delete:</p><p><code>zfs list -t snapshot -o name -s name -r tank/UBUNTU_DATASET_NAME | grep &apos;@&apos; | grep _hourly</code></p><ul><li><code>zfs list -t snapshot -o name -s name -r tank/UBUNTU_DATASET_NAME</code> recursively lists the snapshots</li><li><code>grep &apos;@&apos;</code> is my safety check to make sure I&apos;m listing snapshots</li><li><code>grep _hourly</code> makes sure I&apos;m only deleting the hourly snapshots</li></ul><p>3. Pipe those to <code>xargs -n1 zfs destroy</code>, keeping the <code>grep _hourly</code> filter from step 2:</p><p><code>zfs list -t snapshot -o name -s name -r tank/UBUNTU_DATASET_NAME | grep &apos;@&apos; | grep _hourly | xargs -n1 zfs destroy</code> </p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Proxmox - Clone VM/Container with NFS/CIFS mount]]></title><description><![CDATA[<p></p><p>I have a NAS with storage I need to access from a VM. Proxmox requires this to be mounted directly in Proxmox and passed to the VM/Container as a mount point.</p><p>This worked great until... I wanted to clone my VM. 
Well, it turns out Proxmox can&apos;t</p>]]></description><link>http://www.devstderr.com/proxmox-clone-error-mount/</link><guid isPermaLink="false">629e7bb6febcd6594f6a7060</guid><category><![CDATA[proxmox]]></category><category><![CDATA[cifs]]></category><category><![CDATA[nfs]]></category><category><![CDATA[vm]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Tue, 07 Jun 2022 11:35:00 GMT</pubDate><media:content url="http://www.devstderr.com/content/images/2022/06/Proxmox-Gateway.png" medium="image"/><content:encoded><![CDATA[<img src="http://www.devstderr.com/content/images/2022/06/Proxmox-Gateway.png" alt="Proxmox - Clone VM/Container with NFS/CIFS mount"><p></p><p>I have a NAS with storage I need to access from a VM. Proxmox requires this to be mounted directly in Proxmox and passed to the VM/Container as a mount point.</p><p>This worked great until... I wanted to clone my VM. Well, it turns out Proxmox can&apos;t clone it because of the bind mount point, returning</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://www.devstderr.com/content/images/2022/06/image-1.png" class="kg-image" alt="Proxmox - Clone VM/Container with NFS/CIFS mount" loading="lazy" width="354" height="125"><figcaption><code>&quot;unable to clone mountpoint &apos;mp1&apos; (type bind) (500)&quot;</code></figcaption></figure><p></p><p>Saddened, I tried making a snapshot instead - hoping I could then spin up a new VM from that snapshot. But... 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://www.devstderr.com/content/images/2022/06/image-2.png" class="kg-image" alt="Proxmox - Clone VM/Container with NFS/CIFS mount" loading="lazy" width="871" height="161" srcset="http://www.devstderr.com/content/images/size/w600/2022/06/image-2.png 600w, http://www.devstderr.com/content/images/2022/06/image-2.png 871w" sizes="(min-width: 720px) 720px"><figcaption><code>&quot;The current guest configuration does not support taking new snapshots&quot;</code></figcaption></figure><p></p><p>The setup:</p><p>On Proxmox I had run <code>mount 192.168.1.9:/mnt/nas/Share/proxmox /mnt/pve/nas-Share-proxmox -O async</code></p><p>with standard mounts</p><figure class="kg-card kg-image-card"><img src="http://www.devstderr.com/content/images/2022/06/image-4.png" class="kg-image" alt="Proxmox - Clone VM/Container with NFS/CIFS mount" loading="lazy" width="600" height="153" srcset="http://www.devstderr.com/content/images/2022/06/image-4.png 600w"></figure><p>(note: CIFS can be mounted as in this <a href="http://www.devstderr.com/proxmox-mount-samba/">article</a>)</p><p></p><p>It turns out all my frustration stemmed from this &quot;Advanced&quot; option - &quot;Skip replication.&quot; By enabling &quot;Skip replication&quot; I was able to clone and take Proxmox snapshots of containers and VMs with NFS mounts.</p><p></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://www.devstderr.com/content/images/2022/06/image.png" class="kg-image" alt="Proxmox - Clone VM/Container with NFS/CIFS mount" loading="lazy" width="600" height="255" srcset="http://www.devstderr.com/content/images/2022/06/image.png 600w"><figcaption>Check &quot;Skip Replication&quot;</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Backup Proxmox (single node, but including VM's!) 
with sanoid/syncoid on a schedule]]></title><description><![CDATA[<p></p><p>First of all, I only use a one-node Proxmox server, meaning all my data is in a single pool. If you use multiple nodes this is probably more complex...</p><p>I already <a href="http://www.devstderr.com/ubuntu-syncoid/">back up my Ubuntu computer</a> running ZFS using sanoid/syncoid so the most efficient way for me to back</p>]]></description><link>http://www.devstderr.com/backup-proxmox-syncoid/</link><guid isPermaLink="false">629e7756febcd6594f6a7019</guid><category><![CDATA[proxmox]]></category><category><![CDATA[sanoid/syncoid]]></category><category><![CDATA[backup]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Tue, 07 Jun 2022 11:33:00 GMT</pubDate><media:content url="http://www.devstderr.com/content/images/2022/06/Proxmox-Gateway-1.png" medium="image"/><content:encoded><![CDATA[<img src="http://www.devstderr.com/content/images/2022/06/Proxmox-Gateway-1.png" alt="Backup Proxmox (single node, but including VM&apos;s!) with sanoid/syncoid on a schedule"><p></p><p>First of all, I only use a one-node Proxmox server, meaning all my data is in a single pool. If you use multiple nodes this is probably more complex...</p><p>I already <a href="http://www.devstderr.com/ubuntu-syncoid/">back up my Ubuntu computer</a> running ZFS using sanoid/syncoid, so the most efficient way for me to back up Proxmox is just to repeat that behavior.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/jimsalterjrs/sanoid"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - jimsalterjrs/sanoid: Policy-driven snapshot management and replication tools. Using ZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.) 
Primarily intended for Linux, but BSD use is supported and reasonably frequently tested.</div><div class="kg-bookmark-description">Policy-driven snapshot management and replication tools. Using ZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.) Primarily intended fo...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Backup Proxmox (single node, but including VM&apos;s!) with sanoid/syncoid on a schedule"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">jimsalterjrs</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/9d29f0c214a12072ac9606e9cee98f5ca3ade673d3c67ae3ef2087f2a8fda1b3/jimsalterjrs/sanoid" alt="Backup Proxmox (single node, but including VM&apos;s!) with sanoid/syncoid on a schedule"></div></a></figure><p>Follow the installation instructions from the GitHub repo, and configure your backup schedule in /etc/sanoid/sanoid.conf. &#xA0;The config sets up how frequently snapshots run and how long they&apos;re preserved. For example, by default, hourly snapshots are only kept for 48 hours.</p><figure class="kg-card kg-code-card"><pre><code>[rpool]
        use_template = production
        recursive = yes

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes

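# Note: these values are snapshot counts, not days - hourly = 36 keeps
# the 36 most recent hourly snapshots, daily = 30 keeps 30 dailies, and
# setting an interval (like frequently or yearly) to 0 disables it.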
</code></pre><figcaption>/etc/sanoid/sanoid.conf</figcaption></figure><p>Once configured, just schedule a cronjob with root to sync the backup to your backup server (e.g., TrueNAS/FreeNAS).</p><p>Switch to root with <code>sudo su</code>, then run <code>crontab -e</code> and add the following line.</p><pre><code>5  * * * * /usr/sbin/syncoid --no-sync-snap --recursive  --no-privilege-elevation --sshkey=~/.ssh/key-nas  --exclude=swap rpool username@192.168.1.9:nas/LiveBackups/proxmox/rpool</code></pre><p></p><p>By syncing rpool recursively we back up the Proxmox installation as well as all VMs/containers stored on that pool. For a multi-node setup you&apos;d need to find a way to back up multiple pools from multiple machines. There might be a way using Proxmox snapshots, but I&apos;m not sure of the best way!</p>]]></content:encoded></item><item><title><![CDATA[systemd for persistent SOCKS proxy]]></title><description><![CDATA[<p></p><p>I&apos;d like one browser to connect through a VPN but my main internet connection (and other browsers) to go through my ISP. </p><p>I already have a VPN running on a VM on Proxmox and figured I could use it as a proxy.</p><p>But if the SSH connection dies</p>]]></description><link>http://www.devstderr.com/s/</link><guid isPermaLink="false">6294c4d4febcd6594f6a6f8b</guid><category><![CDATA[systemd]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[proxy]]></category><category><![CDATA[vpn]]></category><dc:creator><![CDATA[root]]></dc:creator><pubDate>Mon, 30 May 2022 15:12:41 GMT</pubDate><media:content url="http://www.devstderr.com/content/images/2022/05/4549847897_df28d7072f_c.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://www.devstderr.com/content/images/2022/05/4549847897_df28d7072f_c.jpg" alt="systemd for persistent SOCKS proxy"><p></p><p>I&apos;d like one browser to connect through a VPN but my main internet connection (and other browsers) to go through my ISP. 
</p><p>I already have a VPN running on a VM on Proxmox and figured I could use it as a proxy.</p><p>But if the SSH connection dies, the proxy disconnects, so I need a way of keeping the SSH connection from dying! &#xA0;</p><p></p><p>Step 1: create /etc/systemd/system/anon-socks.service</p><figure class="kg-card kg-code-card"><pre><code>[Unit]
Description=SSH socks proxy
After=network-online.target
Wants=network-online.target

[Service]
User=MYUSER
ExecStart=/usr/bin/ssh -N -D 9876 -q -i /home/MYUSER/.ssh/vm_pvt_key MYEXTERNALUSER@10.5.0.21
ExecStop=/usr/bin/pkill -f &apos;ssh -N -D 9876 -q&apos;
RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target
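
# To verify once enabled: check the tunnel is listening with
#   ss -ltnp | grep 9876
# and test the proxy with
#   curl --socks5-hostname localhost:9876 https://example.com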
</code></pre><figcaption>/etc/systemd/system/anon-socks.service</figcaption></figure><p>Make sure to update MYUSER to your username on the local machine, and update <code>-i /home/MYUSER/.ssh/vm_pvt_key MYEXTERNALUSER@10.5.0.21</code> to the appropriate connection.</p><p>Restart=always is what restarts the service should it be interrupted for any reason (loss of connection, you suspend and resume your computer, etc.). This is what makes the proxy &quot;undying!&quot; </p><p></p><p>Step 2: reload</p><figure class="kg-card kg-code-card"><pre><code>sudo systemctl daemon-reload</code></pre><figcaption>Reload the unit files</figcaption></figure><p>Step 3: enable it</p><figure class="kg-card kg-code-card"><pre><code>sudo systemctl enable --now anon-socks.service</code></pre><figcaption>Enable it, starting it now</figcaption></figure><p>Step 4: connect to proxy</p><p>about:preferences#general &gt; Network Settings &gt; Settings &gt; Manual proxy configuration</p><figure class="kg-card kg-image-card"><img src="http://www.devstderr.com/content/images/2022/05/2022-05-30-1653917469_screenshot_741x491.jpg" class="kg-image" alt="systemd for persistent SOCKS proxy" loading="lazy" width="741" height="491" srcset="http://www.devstderr.com/content/images/size/w600/2022/05/2022-05-30-1653917469_screenshot_741x491.jpg 600w, http://www.devstderr.com/content/images/2022/05/2022-05-30-1653917469_screenshot_741x491.jpg 741w" sizes="(min-width: 720px) 720px"></figure><p></p><p>Step 5: well, you&apos;re proxied now</p><figure class="kg-card kg-image-card"><img src="http://www.devstderr.com/content/images/2022/05/image.png" class="kg-image" alt="systemd for persistent SOCKS proxy" loading="lazy" width="360" height="240"></figure><p>(assuming the VPN is running on the server - you should confirm your IP has changed)</p>]]></content:encoded></item></channel></rss>