<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  
  
  <channel>
    <title>chrisjrob: cli</title>
    <link>https://chrisjrob.com</link>
    <atom:link href="https://chrisjrob.com/tag/cli/feed/index.xml" rel="self" type="application/rss+xml" />
    <description>GNU Linux, Perl and FLOSS</description>
    <language>en-gb</language>
    <pubDate>Fri, 13 Feb 2026 17:22:31 +0000</pubDate>
    <lastBuildDate>Fri, 13 Feb 2026 17:22:31 +0000</lastBuildDate>
    
    <item>
      <title>Upgrading Ubuntu 12.04 To 14.04 With Limited Bandwidth</title>
      <link>https://chrisjrob.com/2014/09/04/upgrading-ubuntu-12-04-to-14-04-with-limited-bandwidth/</link>
      <pubDate>Thu, 04 Sep 2014 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2014/09/04/upgrading-ubuntu-12-04-to-14-04-with-limited-bandwidth</guid>
      <description>
       <![CDATA[
         
           <img src="https://chrisjrob.com/assets/ubuntu-1404-desktop.png" align="right" alt="Featured Image">
         
         <p>Upgrading Ubuntu at work can make you rather unpopular, as the Internet bandwidth
is fully utilised downloading all the updates to packages you have long
since forgotten that you installed.</p>

<p>It also takes time, time that you should be spending working rather
than upgrading your computer.</p>

<p>For these reasons I like to trickle download the upgrade over a day and
only perform the actual upgrade once all the packages are ready,
typically the following morning.</p>

<!--more-->

<p>This is how I performed my low-bandwidth upgrade…</p>

<p><strong>N.B. This is not the official or recommended way of upgrading between
Ubuntu versions. Specifically my method involves manually disabling some
repositories and updating others to the new release. This would normally
be done by the do-release-upgrade program itself. It works for me, but
please do be aware that you are deviating slightly from the
official method.</strong></p>


<h2 id="step-1-disable-3rd-party-repositories">Step 1: Disable 3rd Party Repositories</h2>

<p>Launch the Ubuntu Software Centre and from the menu select <strong>Edit</strong>
followed by <strong>Software Sources</strong>. Under the <strong>Other Software</strong> tab
please untick all active repositories.</p>

<p>(This step should in any case be done automatically by step 4).</p>

<h2 id="step-2-update-repositories">Step 2: Update Repositories</h2>

<p>Edit <code class="language-plaintext highlighter-rouge">/etc/apt/sources.list</code> and replace all occurrences of “precise”
with “trusty”. If you are of a brave disposition, the following command
should do this for you:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo sed -i.bak 's/precise/trusty/g' /etc/apt/sources.list
</code></pre></div></div>

<p>(This will save a backup copy of sources.list as sources.list.bak, in case
you wish to reverse the change.)</p>
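<p>If you would like to see exactly what the substitution does before touching the real file, you can rehearse it on a throwaway copy (the file name and repository line below are purely illustrative):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ printf 'deb http://gb.archive.ubuntu.com/ubuntu/ precise main restricted\n' &gt; /tmp/sources.test
$ sed -i.bak 's/precise/trusty/g' /tmp/sources.test
$ cat /tmp/sources.test
deb http://gb.archive.ubuntu.com/ubuntu/ trusty main restricted
$ cat /tmp/sources.test.bak
deb http://gb.archive.ubuntu.com/ubuntu/ precise main restricted
</code></pre></div></div>

<p>The -i.bak edits in place and keeps the untouched original with a .bak suffix, exactly as it will on the real sources.list.</p>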

<h2 id="step-3-download-packages">Step 3: Download Packages</h2>

<p>Still in the terminal, type:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo apt-get update
$ sudo apt-get -o Acquire::http::Dl-Limit=64 -d dist-upgrade
</code></pre></div></div>

<p>The 64 limits the download rate to 64 kilobytes per second; please
adjust it to suit your available bandwidth. The “-d” instructs apt-get
merely to download the packages and not to install them.</p>

<p>I believe this stage can be aborted with Ctrl+C at any time and run
again, until such time as all the required packages are downloaded.</p>

<h2 id="step-4-upgrade">Step 4: Upgrade</h2>

<p>Still in the terminal, I tend to use GNU Screen for extra resilience,
type:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo do-release-upgrade
</code></pre></div></div>
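<p>Concretely, the screen part looks like this (the session name is arbitrary):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ screen -S upgrade
$ sudo do-release-upgrade
</code></pre></div></div>

<p>If your terminal dies mid-upgrade, <code class="language-plaintext highlighter-rouge">screen -r upgrade</code> will reattach you to the still-running session.</p>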

<h2 id="conclusion">Conclusion</h2>

<p>I am typing this on my newly upgraded 14.04 installation, after a clean
and trouble-free reboot and an entirely fault-free upgrade.</p>

<p>The truly astonishing aspect of the upgrade is that the computer
remains largely usable throughout. I lost my fonts briefly in one
application during Step 4, but otherwise I was able to work normally. It
didn’t even seem to be slowing my computer down greatly, although this
is a fairly powerful workhorse, so your mileage may vary.</p>

<p>Please do comment if you feel I’ve left anything out of the above, or
indeed if you have found it useful.</p>

<p>Good luck with your upgrade.</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>PDFTK The PDF Toolkit</title>
      <link>https://chrisjrob.com/2014/03/24/pdftk-the-pdf-toolkit-2/</link>
      <pubDate>Mon, 24 Mar 2014 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2014/03/24/pdftk-the-pdf-toolkit-2</guid>
      <description>
       <![CDATA[
         
         <p><a href="http://www.pdflabs.com/docs/pdftk-cli-examples/" title="PDFTK - The PDF Toolkit">PDFTK - The PDF
Toolkit</a></p>

<p>I have long been a keen user of pdftk, the PDF Toolkit, but am
frequently surprised when people have not heard of it. True, it is a
command line tool, but it is easy to incorporate into service menus,
scripts etc and doubtless there is a GUI front-end for it somewhere (in
fact there is one linked to from the above page).</p>

<!--more-->

<p>Clearly a blog post is called for, but, whilst you wait for a post that
will never arrive, <a href="http://www.pdflabs.com/docs/pdftk-cli-examples/" title="PDFTK - The PDF Toolkit">here is a
link</a>
to some examples that should open your eyes to what is possible with
pdftk.</p>

<p>To get started on a Debian-based system:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo apt-get install pdftk
$ man pdftk
</code></pre></div></div>
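<p>To whet your appetite while you wait for that post, here are a couple of classic pdftk one-liners (the file names are placeholders):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pdftk one.pdf two.pdf cat output combined.pdf
$ pdftk combined.pdf cat 1-3 output first-three.pdf
</code></pre></div></div>

<p>The first concatenates two documents into one; the second extracts pages 1 to 3.</p>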

<p>Enjoy.</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>Crontab Header</title>
      <link>https://chrisjrob.com/2014/01/04/crontab-header/</link>
      <pubDate>Sat, 04 Jan 2014 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2014/01/04/crontab-header</guid>
      <description>
       <![CDATA[
         
           <img src="https://chrisjrob.com/assets/crontab.png" align="right" alt="Featured Image">
         
         <p>Very early on in my Linux life, I came across this suggested header for
crontab and I’ve used it ever since. So much so that I am always
slightly thrown when I come across a crontab without it! No, you don’t
need it, yes the standard commented header works just fine, but, if like
me you prefer things neatly lined up, then this might suit you:</p>

<!--more-->

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>MAILTO=
#   _________________________ 1. Minute - Minutes after the hour (0-59)
#  |
#  |      ______________________ 2. Hour - 24-hour format (0-23)
#  |     |
#  |     |      ___________________ 3. Day - Day of the month (1-31)
#  |     |     |
#  |     |     |      ________________ 4. Month - Month of the year (1-12)
#  |     |     |     |
#  |     |     |     |     ______________ 5. Weekday - Day of the week (0-6, 0 indicates Sunday)
#  |     |     |     |    |
#__|_____|_____|_____|____|___Command_____________________________________________________________
</code></pre></div></div>
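<p>For example, an entry under this header might look like the following (the script is hypothetical); this one runs at 02:30 every Sunday:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#__|_____|_____|_____|____|___Command_____________________________________________________________
  30    2     *     *    0   /usr/local/bin/weekly-backup.sh
</code></pre></div></div>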

<p>And if you recognise this as yours, then thank you!</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>How to Make Umount Work With sshfs</title>
      <link>https://chrisjrob.com/2013/11/18/howto-make-umount-work-with-sshfs-unicom-systems-development/</link>
      <pubDate>Mon, 18 Nov 2013 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2013/11/18/howto-make-umount-work-with-sshfs-unicom-systems-development</guid>
      <description>
       <![CDATA[
         
<p>Having created an sshfs mount in <code class="language-plaintext highlighter-rouge">/etc/fstab</code>, I was frustrated that it
would mount okay, but unmounting always resulted in the error “mount
disagrees with the fstab”. The solution at the link below worked for me:</p>

<!--more-->

<p>For more information please visit the following link:</p>

<ul>
  <li><a href="http://www.unicom.com/blog/entry/651">HowTo: Make umount Work with sshfs | Unicom Systems
Development</a></li>
</ul>
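<p>Whichever fstab arrangement you settle on, it is worth knowing that a FUSE filesystem such as sshfs can always be unmounted by the user who mounted it, via fusermount (the mount point here is illustrative):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ fusermount -u ~/mnt/remote
</code></pre></div></div>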


       ]]>
      </description>
    </item>
    
    <item>
      <title>Linux Terminal Command Reference</title>
      <link>https://chrisjrob.com/2013/09/28/linux-terminal-command-reference/</link>
      <pubDate>Sat, 28 Sep 2013 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2013/09/28/linux-terminal-command-reference</guid>
      <description>
       <![CDATA[
         
         <p>I thought that this <a href="http://community.linuxmint.com/tutorial/view/244" title="Linux Terminal Command Reference">Linux Terminal Command
Reference</a> from
the Mint community was excellent. Having learned them piecemeal over
many years, I was almost resentful to see them all listed together.
Linux shouldn’t be easy, it should be knowledge painfully acquired
through years of humiliation on IRC channels and mailing lists!</p>

<!--more-->

<p>For more information please visit:</p>

<ul>
  <li><a href="http://community.linuxmint.com/tutorial/view/244">Linux Terminal Command Reference</a></li>
</ul>


       ]]>
      </description>
    </item>
    
    <item>
      <title>Howto | Convert XPS to PDF</title>
      <link>https://chrisjrob.com/2013/03/12/convert-xps-to-pdf/</link>
      <pubDate>Tue, 12 Mar 2013 10:47:53 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2013/03/12/convert-xps-to-pdf</guid>
      <description>
       <![CDATA[
         
         <h2 id="introduction">Introduction</h2>

<p>XPS is Microsoft’s attempt to replace PDF; the only difference is that everyone can read PDFs, and not everyone can read XPS.  I understand that KDE 4 versions of Okular will support XPS, which may make these instructions unnecessary, although having a conversion tool readily at hand is always useful!</p>

<!--more-->

<p>These instructions were tested on Debian Lenny.  They worked for our specific systems; YMMV.</p>

<h2 id="building-from-source">Building from source</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo apt-get install libxext-dev libxt-dev
$ wget http://ghostscript.com/releases/ghostpdl-8.71.tar.bz2
$ tar xvvjf ghostpdl-8.71.tar.bz2
$ cd ghostpdl-8.71
$ make xps
</code></pre></div></div>

<h2 id="testing">Testing</h2>

<p>After the build you will find <code class="language-plaintext highlighter-rouge">gxps</code> in <code class="language-plaintext highlighter-rouge">xps/obj</code>.</p>

<p>To test, you will need a test document in XPS format.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd xps/obj
$ ./gxps -sDEVICE=pdfwrite -sOutputFile=test.pdf -dNOPAUSE test.xps
</code></pre></div></div>

<h2 id="move-to-bin">Move to bin</h2>

<p>You probably want to move the gxps executable into a convenient location within your PATH; /usr/local/bin may be a good destination.  Once there you ought to be able to run the command from anywhere and have it just work.  Not sure what your PATH is?  Type “echo $PATH” in your terminal.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ echo $PATH
$ sudo cp xps/obj/gxps /usr/local/bin/
$ sudo chown root:root /usr/local/bin/gxps
</code></pre></div></div>

<h2 id="creating-file-type">Creating file type</h2>

<p>XPS probably does not exist on your Linux system as a file type; you can either create it yourself using the KDE Control Centre, or create the following file by hand:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Single user:
~/.kde/share/mimelnk/application/xps.desktop

All users:
/usr/share/mimelnk/application/xps.desktop

[Desktop Entry]
Comment=XPS Document
Hidden=false
Icon=application-xps
MimeType=application/xps
Patterns=*.xps;*.XPS
Type=MimeType
X-KDE-AutoEmbed=false
</code></pre></div></div>

<h2 id="adding-to-servicemenu">Adding to ServiceMenu</h2>

<p>If you are using Konqueror, you can add a service menu (to enable right-click / action menu).</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Single user:
~/.kde/share/apps/konqueror/servicemenus/xpstopdf.desktop

All users:
/usr/share/apps/konqueror/servicemenus/xpstopdf.desktop

[Desktop Entry]
Version=1.0
Encoding=UTF-8
Name=xpstopdf service menu
ServiceTypes=application/xps
Icon=acroread
Actions=xpstopdf

[Desktop Action xpstopdf]
Icon=acroread
Name=Convert XPS to PDF
Exec=cd "%d"; gxps -sDEVICE=pdfwrite -sOutputFile="`echo "%f" | cut -d . -f 1`.pdf" -dNOPAUSE "%f"; mv "%f" ~/.local/share/Trash/files; kdialog --title "Convert XPS to PDF" --passivepopup "Done" 3; echo;
</code></pre></div></div>

<h2 id="testing-servicemenu">Testing ServiceMenu</h2>

<p>You should now be able to right-click on the file and “Convert XPS to PDF”.  This will create a PDF of the same name and move the XPS into trash.</p>

<h2 id="references">References</h2>

<ul>
  <li><a href="http://www.ghostscript.com/GhostPCL.html">http://www.ghostscript.com/GhostPCL.html</a></li>
  <li><a href="http://obscured.info/2010/03/01/converting-xps-to-pdf-on-ubuntu-9-10/">http://obscured.info/2010/03/01/converting-xps-to-pdf-on-ubuntu-9-10/</a></li>
  <li><a href="http://blog.rubypdf.com/2009/04/14/convert-xps-to-pdf-in-two-ways/">http://blog.rubypdf.com/2009/04/14/convert-xps-to-pdf-in-two-ways/</a></li>
</ul>

       ]]>
      </description>
    </item>
    
    <item>
      <title>Default Applications Launched From Terminal</title>
      <link>https://chrisjrob.com/2013/03/05/default-applications-launched-from-terminal/</link>
      <pubDate>Tue, 05 Mar 2013 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2013/03/05/default-applications-launched-from-terminal</guid>
      <description>
       <![CDATA[
         
         <p>For some time it has irritated me that launching URLs from my terminal
would always launch Iceweasel/Firefox, rather than my default browser
Chromium. If you’re running KDE or Gnome, then I accept that this would
be governed from somewhere in the desktop environment’s control panel or
settings, but I run <a href="http://www.pekwm.org" title="PekWM">PekWM</a>, and assumed
that setting the default browser in update-alternatives should be
enough:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># update-alternatives --config x-www-browser
</code></pre></div></div>

<!--more-->

<p>Unfortunately of course many of the applications that I am using are
native to KDE or Gnome and probably are still respecting their
environment’s settings. In the end it was simply a case of editing:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/.local/share/applications/defaults.list
</code></pre></div></div>

<p>And adding the following lines:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>x-scheme-handler/http=chromium.desktop
x-scheme-handler/https=chromium.desktop
</code></pre></div></div>

<p>Now opening links from my terminal is correctly opening a new tab in
Chromium, or running Chromium if it isn’t already.</p>
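<p>A footnote for anyone reading this on a newer distribution: the same per-user mappings have since been standardised into <code class="language-plaintext highlighter-rouge">~/.config/mimeapps.list</code>, under a [Default Applications] section:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Default Applications]
x-scheme-handler/http=chromium.desktop
x-scheme-handler/https=chromium.desktop
</code></pre></div></div>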

<p>Joy.</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>GPS On Linux</title>
      <link>https://chrisjrob.com/2012/09/17/gps-on-linux/</link>
      <pubDate>Mon, 17 Sep 2012 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2012/09/17/gps-on-linux</guid>
      <description>
       <![CDATA[
         
           <img src="https://chrisjrob.com/assets/gps-bu-353.jpg" align="right" alt="Featured Image">
         
         <p>I have bought myself a <a href="http://www.amazon.co.uk/gp/product/B000PKX2KA/ref=as_li_ss_il?ie=UTF8&amp;camp=1634&amp;creative=19450&amp;creativeASIN=B000PKX2KA&amp;linkCode=as2&amp;tag=robsquadnet-21">GPS Receiver BU-353</a>.
Having plugged in the device into my Debian Wheezy workstation, I wanted
to test that it was working.</p>

<!--more-->

<p>A quick <code class="language-plaintext highlighter-rouge">dmesg | tail</code> showed me that the device had been found and
installed correctly (no drivers required).</p>

<p>I then installed the GPS daemon:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo apt-get install gpsd gpsd-clients
$ sudo dpkg-reconfigure gpsd
</code></pre></div></div>

<p>This then started the GPS daemon. The next thing to do was get some
example output, and the tool for this is gpspipe:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gpspipe -w -n 5
</code></pre></div></div>

<p>Lastly, I thought it would be fun to plot the output onto Google Maps
and/or Openstreetmaps:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ tpv=$(gpspipe -w -n 5 | grep -m 1 TPV | cut -d, -f4,6-8,13)
$ latitude=$(echo $tpv | cut -d, -f3 | cut -d: -f2)
$ longitude=$(echo $tpv | cut -d, -f4 | cut -d: -f2)
$ zoom=15
$ google_map_url="http://maps.google.com/?q=${latitude},${longitude}&amp;z=${zoom}"
$ osm_map_url="http://www.openstreetmap.org/?mlat=${latitude}&amp;mlon=${longitude}&amp;zoom=${zoom}&amp;layers=M"
$ xdg-open $google_map_url
$ xdg-open $osm_map_url
</code></pre></div></div>
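<p>One caveat: the cut field numbers above are at the mercy of gpsd’s JSON field ordering, which varies between versions. A variant that keys on the field names instead is less fragile; the TPV line below is canned for illustration, not real gpsd output:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ tpv='{"class":"TPV","lat":51.501476,"lon":-0.140634,"alt":12.3}'
$ latitude=$(echo "$tpv" | grep -o '"lat":[^,}]*' | cut -d: -f2)
$ longitude=$(echo "$tpv" | grep -o '"lon":[^,}]*' | cut -d: -f2)
$ echo "$latitude $longitude"
51.501476 -0.140634
</code></pre></div></div>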

<p>All worked beautifully.</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>How To Scan to OCR From The Command Line</title>
      <link>https://chrisjrob.com/2011/10/24/how-to-scan-to-ocr-from-the-command-line/</link>
      <pubDate>Mon, 24 Oct 2011 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2011/10/24/how-to-scan-to-ocr-from-the-command-line</guid>
      <description>
       <![CDATA[
         
         <p>I just had to remind myself how to scan to OCR, and thought I would
share the results.</p>

<p>Before you start, you need to have sane installed, and you also need
tesseract-ocr - both should be available in your distro’s repositories.</p>

<!--more-->

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo apt-get install sane-utils tesseract-ocr
</code></pre></div></div>

<p>Next you need to find out what scanners you have available, and you do
this with:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ scanimage -L
device `v4l:/dev/video0' is a Noname Vimicro USB Camera (Altair) virtual device
device `plustek:libusb:004:002' is a Epson Perfection 1250/Photo flatbed scanner
</code></pre></div></div>

<p>Obviously the latter is my scanner.</p>

<p>Assuming you have a working scanner, the following is a simple two liner
to scan and OCR.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ scanimage -d 'plustek:libusb:004:002' --mode Lineart \
--format tiff -x 215 -y 297 --resolution 200 &gt; /tmp/example.tif
</code></pre></div></div>

<p>And finally convert to text with tesseract:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ tesseract /tmp/example.tif example
</code></pre></div></div>

<p>You should now have a file example.txt in your current directory, which
you can open in any text editor.</p>

<p>Obviously this has limitations - it works for single-page A4 portrait
typed documents - but it gives you the basics.</p>

<p>You could probably experiment with the resolution, 200 worked for me,
so I didn’t bother trying anything else.  Traditionally the higher the 
resolution the better, but I seem to recall that tesseract works better
on 300 and below.</p>

<p>On my Epson Perfection 1250 I found that I needed to add the sane 
switch <code class="language-plaintext highlighter-rouge">--warmup-time 0</code> as otherwise it never finished warming up.</p>

<p>If you would prefer to OCR an existing PDF, which is another thing that
I find myself doing from time to time, then first convert it to a tif:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ convert -density 200 example.pdf -depth 8 /tmp/example.tif
</code></pre></div></div>

<p>And then run the above tesseract command.</p>
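<p>If the PDF has several pages, a %d in the output name makes convert write one tif per page, and a short loop will OCR the lot - a sketch along the same lines as above:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ convert -density 200 example.pdf -depth 8 /tmp/page-%d.tif
$ for f in /tmp/page-*.tif; do tesseract "$f" "${f%.tif}"; done
$ cat /tmp/page-*.txt &gt; example.txt
</code></pre></div></div>

<p>(With ten or more pages the shell glob sorts page-10 before page-2, so pad the numbers - e.g. page-%02d.tif - for anything longer.)</p>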


       ]]>
      </description>
    </item>
    
    <item>
      <title>Debian Package NCDU</title>
      <link>https://chrisjrob.com/2011/05/09/debian-package-ncdu/</link>
      <pubDate>Mon, 09 May 2011 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2011/05/09/debian-package-ncdu</guid>
      <description>
       <![CDATA[
         
           <img src="https://chrisjrob.com/assets/debian_logo.png" align="right" alt="Featured Image">
         
<p>We all know that feeling when your disk fills up and you are left
desperately scrabbling around to find out where your disk space has
gone. In 
<a href="/2011/02/24/analyse-disk-usage-with-konqueror/" title="Analyse disk usage with Konqueror">a previous blog post I discussed the use of the wonderful Konqueror File Size View</a>,
but this is no good for remote servers. Normally I would resort to “du”
or the wonderful “find” utility to look for large files, but here is an
interesting alternative that I had not come across before: ncdu (ncurses
disk usage).</p>

<!--more-->

<p>Its name tells you pretty much everything you need to know. It can be
installed with a simple <code class="language-plaintext highlighter-rouge">apt-get install ncdu</code> and then the man page is
a useful guide. In simple terms it can just be run with:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># ncdu /var
</code></pre></div></div>

<p>The lovely thing about ncdu is that once it completes its run (which can
take a long time on a large disk or a nfs share), you can drill into the
directory structure following the disk usage to determine where your
space has gone.</p>
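<p>One switch worth knowing about: <code class="language-plaintext highlighter-rouge">-x</code> stops ncdu crossing filesystem boundaries, so a scan of the root disk will not wander off into a slow NFS mount:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># ncdu -x /
</code></pre></div></div>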

<p>It is a very simple program but one that I will find most useful.</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>Limiting The Bandwidth Usage of apt-get and wget</title>
      <link>https://chrisjrob.com/2011/03/31/limiting-the-bandwidth-usage-of-apt-get-and-wget/</link>
      <pubDate>Thu, 31 Mar 2011 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2011/03/31/limiting-the-bandwidth-usage-of-apt-get-and-wget</guid>
      <description>
       <![CDATA[
         
         <p>I have to be careful about the bandwidth I use at work; so I limit the
bandwidth of apt-get and wget.</p>

<h2 id="apt-get">apt-get</h2>

<p>For apt-get you just need to create a new file:</p>

<!--more-->

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/etc/apt/apt.conf.d/76download
</code></pre></div></div>

<p>with the following contents:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Acquire {
    Queue-mode "access";
    http {
        Dl-Limit "128";
    };
};
</code></pre></div></div>

<p>The above will limit apt-get’s download rate to 128 kilobytes per
second; adjust this figure to suit your network.</p>

<p>Alternatively, if you don’t want this change to be set permanently, then
you can specify it in the command line:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo apt-get -o Acquire::http::Dl-Limit=128 upgrade
</code></pre></div></div>

<h2 id="wget">wget</h2>

<p>To rate-limit wget, simply edit <code class="language-plaintext highlighter-rouge">/etc/wgetrc</code> or your personal
configuration at <code class="language-plaintext highlighter-rouge">~/.wgetrc</code> and add or edit the following line:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>limit-rate=128k
</code></pre></div></div>
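<p>As with apt-get, the limit can also be applied to a single invocation rather than permanently (the URL is illustrative):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wget --limit-rate=128k http://example.com/large-file.iso
</code></pre></div></div>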

<h2 id="other">other</h2>

<p>Other packages can be configured in different ways, but you could
install <code class="language-plaintext highlighter-rouge">trickle</code> and then read its man page to determine how to use it.
For example (from the man page):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ trickle -u 128 -d 128 ncftp
</code></pre></div></div>


       ]]>
      </description>
    </item>
    
    <item>
      <title>Listing A Package's Dependencies With apt-rdepends</title>
      <link>https://chrisjrob.com/2011/03/17/listing-a-packages-dependencies-with-apt-rdepends/</link>
      <pubDate>Thu, 17 Mar 2011 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2011/03/17/listing-a-packages-dependencies-with-apt-rdepends</guid>
      <description>
       <![CDATA[
         
         <p>I sometimes find myself wondering what a package’s dependencies are.
This question is usually quickly satisfied with a
<code class="language-plaintext highlighter-rouge">$ sudo apt-get install packagename</code> and then aborting, or perhaps more
elegantly <code class="language-plaintext highlighter-rouge">$ sudo apt-get -s install packagename</code> to simulate the
installation.</p>

<!--more-->

<p>This doesn’t give you the entire picture, as it only lists the
dependencies that you don’t already have; which is usually all you care
about, but there are occasions when you would like to list all of a
package’s dependencies, for example when planning for a system that is
not built yet, or not accessible at the current time. Or just for idle
curiosity! Perhaps that’s just me.</p>

<p>Anyhow, <code class="language-plaintext highlighter-rouge">apt-rdepends</code> is the application for the job. It doesn’t just
list the package’s dependencies, but it recursively goes through each
dependency’s dependencies.</p>

<p>Install it with the usual <code class="language-plaintext highlighter-rouge">$ sudo apt-get install apt-rdepends</code> and then
simply run with:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ apt-rdepends packagename | less
</code></pre></div></div>

<p>Yes, it is quite verbose, hence the “| less” - leave it out if you
prefer, or use “| more” which is more likely to be installed on your
system (tip: install “less” - less is better than more, if that makes
any sense).</p>
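<p>Incidentally, apt-rdepends will also answer the opposite question - which packages depend on this one - via its -r (reverse) switch:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ apt-rdepends -r packagename | less
</code></pre></div></div>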

<p>For example, I had just installed “flite” and was amazed at how
functional it was. I wondered to myself whether it was just a front-end
to festival - but how to find out?</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ apt-rdepends flite
</code></pre></div></div>

<p>Which comes back with no other speech synthesis engine (e.g. festival),
so clearly flite is a speech synthesis engine in its own right.</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>Automatically Process New Files With Fsniper</title>
      <link>https://chrisjrob.com/2011/02/24/automatically-process-new-files-with-fsniper/</link>
      <pubDate>Thu, 24 Feb 2011 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2011/02/24/automatically-process-new-files-with-fsniper</guid>
      <description>
       <![CDATA[
         
         <p>I had never heard of fsniper until it was mentioned in a mailing list
today, but it sounds excellent:</p>

<p><a href="http://www.linux.com/archive/feature/150200">Linux.com :: Automatically process new files with fsniper</a></p>

<p>Now I am wondering if I can use it to prompt an rsync to sync our shared
documents to our remote site, and <a href="http://bio-geeks.com/?p=662" title="Bio-Geeks">it seems I could</a>.  This is a major
headache for me, as we have two branches and a shared documents
repository.</p>

<!--more-->

<p>I have previously tried using <a href="http://www.cis.upenn.edu/~bcpierce/unison/" title="Unison File Syncrhonizer">Unison</a>
to synchronise between the branches, but this has created a lot of load
on the master server, massively slowing down performance for all the
users on the master server.</p>

<p>We are currently using apache2+webdav+svn but this is not working well
for us.  Potentially fsniper+rsync could work very well.  It would
clearly need to be running on both servers and I can see a potential
clash if the two servers happen to try and update the same file at the
same time.  More thought is required, any suggestions - do let me know.</p>

<p>The links to the fsniper site seem to be outdated in both of the above
articles, but it may be found at:</p>

<p><a href="http://freshmeat.net/projects/fsniper">http://freshmeat.net/projects/fsniper</a></p>

<p>It doesn’t appear to be packaged for Debian, but I will probably compile
from source and try it soon.  More later!</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>Howto | Archive to DAT</title>
      <link>https://chrisjrob.com/2010/11/23/archive-to-dat/</link>
      <pubDate>Tue, 23 Nov 2010 16:25:08 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2010/11/23/archive-to-dat</guid>
      <description>
       <![CDATA[
         
           <img src="https://chrisjrob.com/assets/freecom-dat.jpg" align="right" alt="Featured Image">
         
         <p>I am a rank amateur at both tar and mt.  This page constitutes no more than you could discover yourself by reading the manpages for tar and mt, or Googling.</p>

<p>You have been warned!</p>

<h2 id="simple-instructions">Simple instructions</h2>

<!--more-->

<h3 id="rewind">Rewind</h3>

<p><strong>Use the rewind command before backup to ensure that you are overwriting previous backups.</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 rewind
</code></pre></div></div>

<h3 id="backup">Backup</h3>

<p>E.g. back up the <code class="language-plaintext highlighter-rouge">/www</code> and <code class="language-plaintext highlighter-rouge">/home</code> directories with the tar command (z = gzip compression):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># tar -czf /dev/st0 /www /home
</code></pre></div></div>

<p><strong>Use -v to receive verbose feedback.</strong></p>

<h3 id="backup-with-verify">Backup with Verify</h3>

<p>If you do not compress your backup, then you can verify in the same process:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># tar -cWf /dev/st0 /www /home
</code></pre></div></div>

<h3 id="where">Where</h3>

<p>Find out what block you are at with mt command:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 tell
</code></pre></div></div>

<p>This does not appear to work on my version of the software.  In theory the Status option has a line for <code class="language-plaintext highlighter-rouge">block number=</code>, but surprisingly, after completing a backup it still seems to return 0.  If I ever work out why this is, I will update this entry.</p>

<h3 id="list">List</h3>

<p>Display list of files on tape drive:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># tar -tzf /dev/st0
</code></pre></div></div>

<h3 id="restore">Restore</h3>

<p>E.g. <code class="language-plaintext highlighter-rouge">/www</code> directory</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># cd /
# mt -f /dev/st0 rewind
# tar -xzf /dev/st0 www
</code></pre></div></div>

<p>E.g. <code class="language-plaintext highlighter-rouge">/home/test</code> directory</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># cd /
# mt -f /dev/st0 rewind
# tar --checkpoint -xvvzpkf /dev/st0 home/test

--checkpoint : provide occasional checkpoint messages
-x : extract
-v : verbosely
-v : even more verbosely
-z : uncompress
-p : retain permissions
-k : leave existing files alone
-f /dev/st0 : the tape device
home/test : the path to restore
</code></pre></div></div>
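<p>The extract flags above can be tried safely against an ordinary archive file standing in for the tape; the paths and contents below are invented for illustration:</p>

```shell
# Build a throwaway archive standing in for the tape device;
# every path here is hypothetical
mkdir -p /tmp/tape-demo/home/test
echo "keep me" > /tmp/tape-demo/home/test/notes.txt
cd /tmp/tape-demo
tar -czf fake-tape.tar.gz home/test

# Extract into a fresh directory with the same flags as above:
# -x extract, -v verbose, -z uncompress, -p preserve permissions,
# -k keep existing files, -f the archive standing in for /dev/st0
mkdir -p restore && cd restore
tar -xvzpkf ../fake-tape.tar.gz home/test
cat home/test/notes.txt
```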

<h3 id="unload">Unload</h3>

<p>Unload the tape:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 offline
</code></pre></div></div>

<h3 id="status">Status</h3>

<p>Display status information about the tape unit:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 status
</code></pre></div></div>

<h3 id="erase">Erase</h3>

<p>Erasing the tape may take hours and there is not normally any need to do this; simply rewind the tape before performing backup, or use the mt command to position at the beginning of the tape.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 erase
</code></pre></div></div>

<h3 id="moving-about-the-tape">Moving about the tape</h3>

<p>You can move backward or forward on the tape with the mt command itself:</p>

<h4 id="go-to-end-of-data">Go to end of data</h4>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 eod
</code></pre></div></div>

<h4 id="goto-previous-record">Go to previous record</h4>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 bsfm 1
</code></pre></div></div>

<h4 id="forward-record">Forward record</h4>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 fsf 1
</code></pre></div></div>

<h2 id="restore-1">Restore</h2>

<p>This code has not been checked or tested:</p>

<ul>
  <li>Check status of tape: <code class="language-plaintext highlighter-rouge">mt -f /dev/st0 status</code></li>
  <li>Go to the directory where you want to restore your file(s).</li>
  <li>Go to the right file on the tape with the following commands:
    <ul>
      <li>Check file number and position in file: <code class="language-plaintext highlighter-rouge">mt -f /dev/st0 status</code></li>
      <li>Advance one file: <code class="language-plaintext highlighter-rouge">mt -f /dev/st0 fsf 1</code></li>
      <li>View contents of tar-file: <code class="language-plaintext highlighter-rouge">tar -tvf /dev/st0</code></li>
      <li>Go back one file: <code class="language-plaintext highlighter-rouge">mt -f /dev/st0 bsf 1</code></li>
      <li>
        <p>If you are in the last block of a file and you should be at the beginning of the file, do the following:</p>

        <p><code class="language-plaintext highlighter-rouge">mt -f /dev/st0 bsf 1</code></p>

        <p><code class="language-plaintext highlighter-rouge">mt -f /dev/st0 fsf 1</code></p>
      </li>
      <li>And check with: <code class="language-plaintext highlighter-rouge">mt -f /dev/st0 status</code></li>
    </ul>
  </li>
  <li>Extract your file(s): <code class="language-plaintext highlighter-rouge">tar -xvf /dev/st0 ~files~</code></li>
  <li>Rewind and eject tape: <code class="language-plaintext highlighter-rouge">mt -f /dev/st0 offline</code></li>
</ul>

<h2 id="hardware-compression">Hardware Compression</h2>

<p>It may be possible to switch the drive’s hardware compression off and on:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mt -f /dev/st0 compression 0
# mt -f /dev/st0 compression 1
</code></pre></div></div>

<p>Other people report replacing the 0 with “off” and 1 with “on”.</p>

<h2 id="references">References</h2>

<ul>
  <li>man tar</li>
  <li>man mt</li>
  <li><a href="http://www.cyberciti.biz/faq/linux-tape-backup-with-mt-and-tar-command-howto/">Howto: Linux Tape Backup with MT and TAR commands</a></li>
  <li><a href="http://www.cs.inf.ethz.ch/stricker/lab/linux_tape.html">How to use the DAT-tape with Linux</a></li>
  <li><a href="http://www.cyberciti.biz/faq/unix-verify-tape-backup/">Verify tar command tape backup under Linux or UNIX</a></li>
  <li><a href="http://www.cyberciti.biz/faq/tape-drives-naming-convention-under-linux/">Tape drives naming convention under Linux</a></li>
  <li><a href="http://www.cyberciti.biz/faq/backup-home-directories-in-linux/">Backup home directories in Linux</a></li>
  <li><a href="http://www.cyberciti.biz/faq/howto-use-tar-command-through-network-over-ssh-session/">Howto: Use tar command through network over ssh session</a></li>
  <li><a href="http://www.cyberciti.biz/faq/rhel-centos-debian-set-tape-blocksize/">Linux Set the Block Size for a SCSI Tape Device</a></li>
</ul>


       ]]>
      </description>
    </item>
    
    <item>
      <title>Howto | DD over SSH</title>
      <link>https://chrisjrob.com/2009/10/09/dd-over-ssh/</link>
      <pubDate>Fri, 09 Oct 2009 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2009/10/09/dd-over-ssh</guid>
      <description>
       <![CDATA[
         
         <p>Wow, can’t believe my last post was four months ago; well, here’s a quick tip to get me back into the blogging frame of mind.  If you wish to take a drive image copy over the network, apparently you do not need an NFS share available.  Instead you can use ssh as follows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dd if=/dev/sda bs=1M | ssh root@blah "cat &gt; /root/disk.img"
</code></pre></div></div>

<!--more-->

<p>Haven’t tried it yet, but it sounds incredible.  The <code class="language-plaintext highlighter-rouge">bs=1M</code> matters: with dd’s default 512-byte blocks the copy will take forever.</p>
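<p>The pipeline itself can be rehearsed locally by swapping the ssh hop for a plain redirection; a small scratch file stands in for the device and all paths are placeholders:</p>

```shell
# Stand-in for /dev/sda: a small file of random data
dd if=/dev/urandom of=/tmp/fake-disk.img bs=1M count=2 2>/dev/null
# Same shape as the ssh pipeline, but writing locally; over the
# network the 'cat > ...' half would run inside ssh root@host "..."
dd if=/tmp/fake-disk.img bs=1M 2>/dev/null | cat > /tmp/disk-copy.img
# The copy should be byte-for-byte identical to the source
cmp /tmp/fake-disk.img /tmp/disk-copy.img
```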

       ]]>
      </description>
    </item>
    
    <item>
      <title>Howto | Apache SSL</title>
      <link>https://chrisjrob.com/2009/05/19/apache-ssl/</link>
      <pubDate>Tue, 19 May 2009 12:02:19 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2009/05/19/apache-ssl</guid>
      <description>
       <![CDATA[
         
         <p>How to enable Apache SSL:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># a2enmod ssl
# cd /etc/apache2/
# mkdir ssl
# openssl req -new -x509 -days 3650 -nodes -out /etc/apache2/ssl/apache.pem -keyout /etc/apache2/ssl/apache.pem
# chmod 600 /etc/apache2/ssl/apache.pem
# /etc/init.d/apache2 restart
</code></pre></div></div>

<!--more-->

<p><strong>When generating the certificate it prompts for “Your name”; this should be the name of your site, e.g. host.example.com.</strong></p>

<p>Then point your SSL site configuration at the certificate:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SSLEngine on
SSLCertificateFile /etc/apache2/ssl/apache.pem
</code></pre></div></div>
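<p>If you want to script the certificate step, the prompts can be skipped with <code class="language-plaintext highlighter-rouge">-subj</code>; the hostname and the /tmp paths below are placeholders (on a real server you would keep /etc/apache2/ssl/apache.pem):</p>

```shell
# Non-interactive version of the certificate step: -subj supplies
# the answer to the "Your name" prompt. host.example.com and the
# /tmp paths are placeholders for illustration.
openssl req -new -x509 -days 3650 -nodes \
    -subj "/CN=host.example.com" \
    -out /tmp/apache.pem -keyout /tmp/apache.pem
chmod 600 /tmp/apache.pem
# Confirm the subject made it into the certificate
openssl x509 -in /tmp/apache.pem -noout -subject
```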

       ]]>
      </description>
    </item>
    
    <item>
      <title>Howto | Analyse Boot Speed</title>
      <link>https://chrisjrob.com/2009/04/29/analyse-boot-speed/</link>
      <pubDate>Wed, 29 Apr 2009 19:45:16 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2009/04/29/analyse-boot-speed</guid>
      <description>
       <![CDATA[
         
           <img src="https://chrisjrob.com/assets/bootchart.png" align="right" alt="Featured Image">
         
         <h2 id="step-1---install-bootchart">Step 1 - Install bootchart</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># aptitude install bootchart
</code></pre></div></div>

<h2 id="step-2---update-grub">Step 2 - Update Grub</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># nano /boot/grub/menu.lst
</code></pre></div></div>

<!--more-->

<p>Look for a line that looks like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># altoptions=(single-user mode) single
</code></pre></div></div>

<p>And add a similar line after it that looks like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># altoptions=(single-user mode) single
# altoptions=(bootchart) init=/sbin/bootchartd
</code></pre></div></div>

<p>Then update your grub entries by typing:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># update-grub
</code></pre></div></div>

<h2 id="step-3---reboot">Step 3 - Reboot</h2>

<p>Reboot, making sure that you select the bootchart option from the grub menu.</p>

<p>Once the boot has finished, open a terminal and type:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ bootchart
</code></pre></div></div>

<h2 id="step-4---analyse">Step 4 - Analyse</h2>

<p>You should find a new png image called bootchart.png in your current directory (the one you ran the bootchart command from).  Open it, analyse it, write a really good howto on how to optimise your system, and then publish it on the web!</p>

<p>Good luck!</p>


       ]]>
      </description>
    </item>
    
    <item>
      <title>Howto | Copy Directories &amp; Preserve Permissions</title>
      <link>https://chrisjrob.com/2009/03/21/copy-directories-and-preserve-permissions/</link>
      <pubDate>Sat, 21 Mar 2009 05:58:15 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2009/03/21/copy-directories-and-preserve-permissions</guid>
      <description>
       <![CDATA[
         
         <h2 id="the-command">The Command</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd /
$ tar cf - opt | (cd /archive; tar xf - )
</code></pre></div></div>

<p><strong>You cannot run this command with “sudo”; if you need root access for your copy, you will need to execute “sudo su” or log in as root.</strong></p>

<!--more-->

<h2 id="what-the-command-will-do">What the command will do</h2>

<p>With any command that you are given by someone, you should always check what that command will do:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ tar cf - opt | (cd /archive; tar xf - )
c = create
f - = file stdout
opt = source path
| = pipe all above to...
(
cd /archive;
x = extract
f - = file stdin
)
</code></pre></div></div>

<p>So this command will pipe a new archive from opt to stdout, which it will then recreate in /archive.</p>

<p>This will copy <code class="language-plaintext highlighter-rouge">/opt</code> into <code class="language-plaintext highlighter-rouge">/archive/opt</code>, preserving permissions, file modification times etc.</p>
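<p>A quick way to satisfy yourself that permissions really do survive the pipe; all paths below are hypothetical:</p>

```shell
# Fix the umask so the comparison is deterministic
umask 022
# Source tree with a distinctive mode bit on one file
mkdir -p /tmp/pipe-src/opt
echo '#!/bin/sh' > /tmp/pipe-src/opt/run.sh
chmod 751 /tmp/pipe-src/opt/run.sh
mkdir -p /tmp/pipe-dst
# Same pattern as above: tar to stdout, untar in the destination
cd /tmp/pipe-src
tar cf - opt | (cd /tmp/pipe-dst; tar xf - )
# The mode should survive the round trip
stat -c '%a' /tmp/pipe-dst/opt/run.sh
```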

<p>Read man tar for more details.</p>

       ]]>
      </description>
    </item>
    
    <item>
      <title>Howto | Create Thumbnails from Movies</title>
      <link>https://chrisjrob.com/2009/03/15/create-thumbnails-from-movies/</link>
      <pubDate>Sun, 15 Mar 2009 20:08:39 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2009/03/15/create-thumbnails-from-movies</guid>
      <description>
       <![CDATA[
         
         <h2 id="introduction">Introduction</h2>

<p>Sometimes you want to catalogue your movies with thumbnail images from the movie.</p>

<h2 id="the-solution">The Solution</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ffmpeg -itsoffset -240  -i themovie.mpg -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 thumbnail.jpg
$ ffmpeg -itsoffset -240  -i themovie.mpg -vcodec png -vframes 1 -an -f rawvideo -s 320x240 thumbnail.png
</code></pre></div></div>

<!--more-->

<h2 id="example-script">Example Script</h2>

<p>The following example script should be saved in <code class="language-plaintext highlighter-rouge">/usr/local/bin</code> or somewhere in your path.  As you can see, this script will run through all the movies in the current directory and create thumbnails in the default mythtv MythVideo directory.</p>

<p>This script was written to perform a particular task, and a more generic script would be better.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="c"># Creates a thumbnail of an mpeg</span>

<span class="k">for </span>a <span class="k">in</span> <span class="k">*</span>.mpg<span class="p">;</span> <span class="k">do
    if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">"</span><span class="nv">$a</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
        </span><span class="nv">b</span><span class="o">=</span><span class="s2">"</span><span class="k">${</span><span class="nv">a</span><span class="p">%.mpg</span><span class="k">}</span><span class="s2">"</span>
        <span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">"/home/mythtv/.mythtv/MythVideo/</span><span class="k">${</span><span class="nv">b</span><span class="k">}</span><span class="s2">.jpg"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
            </span><span class="nb">echo</span> <span class="s2">"/home/mythtv/.mythtv/MythVideo/</span><span class="k">${</span><span class="nv">b</span><span class="k">}</span><span class="s2">.jpg already exists"</span>
        <span class="k">else
            </span>ffmpeg <span class="nt">-itsoffset</span> <span class="nt">-240</span>  <span class="nt">-i</span> <span class="s2">"</span><span class="nv">$a</span><span class="s2">"</span> <span class="nt">-vcodec</span> mjpeg <span class="nt">-vframes</span> 1 <span class="nt">-an</span> <span class="nt">-f</span> rawvideo <span class="nt">-s</span> 320x240 <span class="s2">"/home/mythtv/.mythtv/MythVideo/</span><span class="k">${</span><span class="nv">b</span><span class="k">}</span><span class="s2">.jpg"</span>
        <span class="k">fi
    fi
done</span>
</code></pre></div></div>

       ]]>
      </description>
    </item>
    
    <item>
      <title>Howto | Convert DVR-MS to MPEG</title>
      <link>https://chrisjrob.com/2009/03/10/convert-dvr-ms-to-mpeg/</link>
      <pubDate>Tue, 10 Mar 2009 22:02:23 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2009/03/10/convert-dvr-ms-to-mpeg</guid>
      <description>
       <![CDATA[
         
         <h2 id="simple-bash-script-to-convert-all-dvr-ms-files-in-a-directory">Simple bash script to convert all dvr-ms files in a directory</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#! /bin/bash</span>
<span class="k">for </span>a <span class="k">in</span> <span class="k">*</span>.dvr-ms<span class="p">;</span> <span class="k">do
    if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">"</span><span class="nv">$a</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
        </span><span class="nv">b</span><span class="o">=</span><span class="s2">"</span><span class="k">${</span><span class="nv">a</span><span class="p">%.dvr-ms</span><span class="k">}</span><span class="s2">"</span>
        ffmpeg <span class="nt">-i</span> <span class="s2">"</span><span class="nv">$a</span><span class="s2">"</span> <span class="nt">-vcodec</span> copy <span class="nt">-acodec</span> copy <span class="s2">"</span><span class="k">${</span><span class="nv">b</span><span class="k">}</span><span class="s2">.mpg"</span>
    <span class="k">fi
done</span>
</code></pre></div></div>

       ]]>
      </description>
    </item>
    
    <item>
      <title>Command line XML validator</title>
      <link>https://chrisjrob.com/2008/12/18/command-line-xml-validator/</link>
      <pubDate>Thu, 18 Dec 2008 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2008/12/18/command-line-xml-validator</guid>
      <description>
       <![CDATA[
         
         <p>I have always used the <a href="http://www.w3schools.com/XML/xml_validator.asp">W3Schools On-line XML validator</a>, but have always found it unreliable and I’ve never got it to validate against an XML schema file.</p>

<p>Thanks to Google I came across the following command (part of <code class="language-plaintext highlighter-rouge">libxml2-utils</code>):</p>

<!--more-->

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ xmllint --noout --schema schema.xsd file.xml
</code></pre></div></div>

<p>I find it amazing that all this time I had the perfect command pre-installed on my Linux desktop and I never knew.</p>

       ]]>
      </description>
    </item>
    
    <item>
      <title>Command line PDF tool pdftk</title>
      <link>https://chrisjrob.com/2008/12/09/command-line-pdf-tool-pdftk/</link>
      <pubDate>Tue, 09 Dec 2008 00:00:00 +0000</pubDate>
      <author>chrisjrob@gmail.com (Chris Roberts)</author>
      <guid>https://chrisjrob.com/2008/12/09/command-line-pdf-tool-pdftk</guid>
      <description>
       <![CDATA[
         
         <p>I had a 25-page OpenOffice Writer document that needed to be sent as a pdf.  Obviously creating a pdf from OpenOffice is simple enough, but I wanted to insert additional pages from other documents within the final pdf (i.e. not just appended on the end).</p>

<!--more-->

<p>We often use pdftk for command line pdf manipulation, but I hadn’t delved deeply into its features.  Using pdftk, all I had to do was:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pdftk A=main.pdf B=2nd.pdf C=3rd.pdf cat A1-24 B A25 C output final.pdf
</code></pre></div></div>

<p>In other words the final document (final.pdf) is pages 1-24 of document A (main.pdf), the whole of document B (2nd.pdf), page 25 of document A (main.pdf) and the whole of document C (3rd.pdf).</p>

<p>For a command line program, I think that is stunningly intuitive.  And best of all it was instantaneous and there was no loss of quality.</p>

<p>pdftk can seemingly do just about anything with pdfs, including encrypt, decrypt, repair, burst and rotate.</p>

<p>pdftk is installable from the Debian repos and typing <code class="language-plaintext highlighter-rouge">pdftk --help</code> gives you a handy set of usage examples, so that you don’t have to re-learn it every time you use it.</p>

       ]]>
      </description>
    </item>
    
  </channel> 
</rss>
