Managing IOS Configuration Snippets

Elle Janet Plato plato at wisc.edu
Sun Mar 2 20:56:10 UTC 2014


Robert,

>> sgrep can dump out a "stanza" of ios-like config, then you can rcsdiff
>> that to your master, per 'chunk' of config.
>> Dale
>>
>
> I'm digging the idea of your command.   Along the same lines I've got this
> awk snippet that I made and then forgot about.  It functions like the cisco
> pipe begin/end commands:
>

The clever part comes when you write something to tweak sgrep to spit out files with the commands you need to run to reconfigure multiple devices, and then use the "make" command to invoke clogin in parallel to push those changes.  Showing a complete example here would get tedious, but I have a tarball of the minimum fileset needed to do this lying around somewhere.  Dale probably has one as well.

The basic workflow is as follows:

1) Given a set of configuration files named device.conf, generate a set of configuration changes to be made, and place those changes in a set of files named device.cmd.  Device is the hostname of a router or switch you need to configure.  Device.conf is the configuration file for that device, for example from rancid.  The contents of device.cmd are the valid config commands, including "config term", "exit" and "write mem", that you intend to run on the device.  We will talk about what to do with those command files in a later step.
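For example, a finished device.cmd for a made-up NTP server change (the addresses are invented here purely for illustration, not from the real fileset) might look like:

conf t
no ntp server 10.0.0.1
ntp server 10.0.0.2
exit
wr mem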

So if you wanted to edit the name of vlan 10 on all of your routers, you might do this:
  for device in `ls r-*.conf | cut -d. -f1`;
      do sgrep -is "^vlan 10\n" $device.conf | sed -e 's/name .*/name new-name/' | grep -v Found > $device.cmd;
      done

r-myrouter-100.cmd might contain:
vlan 10
 name new-name
!

So you would need to make your loop more complex to add the "conf t", "exit" and "wr mem" parts, or just use a script.  I write a script called mkcmdfile to wrap sgrep every time I need to do work like this; mkcmdfile means "make command file".  If you want to work purely from the command line, add a second sed rule that looks for "Found" and replaces it with "conf t", eliminate the trailing grep -v Found, and change the original sed so it replaces "name new-name" with "name new-name\nexit\nwr mem".  I don't recommend doing that, but you could.  I put that logic into a wrapper script, or I post-process the cmd files with a second for loop (a sketch of that second loop follows).
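If you go the post-processing route, that second loop can be a rough sketch like this; it wraps each .cmd file in "conf t" at the top and "end" / "wr mem" at the bottom ("end" gets you back out of vlan config mode before the write):

for device in `ls r-*.cmd | cut -d. -f1`; do
    ( echo "conf t"; cat $device.cmd; echo "end"; echo "wr mem" ) > $device.tmp &&
        mv $device.tmp $device.cmd;
done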

Now that is the less-hard way to change 4,000 devices.  Admittedly, for made-up examples like renaming a vlan on 4,000 devices, it is only a little less complicated than just doing it by hand, but hopefully you get the idea.

One huge win, though: even when doing it by hand as above, I can generate the files ahead of time, ensure they are correct, and then when I push them at 4AM I do not have to worry about typographic errors.  If you have a change approval board, the CAB can examine the .cmd files if it chooses to; this is a great way to sanity-check your work at 2PM when your mind is clear.

Loops are cool, but in reality I am far more likely to edit sgrep with some reasonable defaults, hand it a list of filenames, and rely on $ARGV changing to reflect the name of the file it is currently working on.  When I have a diamond loop (while (<>) { do stuff }) in my Perl, I know $ARGV.conf is what sgrep is reading from, and $ARGV.cmd is the file sgrep should be writing to.  In fact, I have something I call mkcmdfile that does pretty much that.

Writing a custom sgrep for each change allows me to consider the entire contents of the router, including its name, the business rules associated with it, anything I keep in a database, etc, when making configuration changes to it.

Don't worry too much about how to do step 1.  The lesson you must understand is that step 1 is finished when you have a directory full of files, all named device.cmd, where device is the name of the device to work on and the contents are the commands to execute.

2) Given a directory full of command files (named device.cmd), invoke clogin on each one:
   The hard way would be another for loop:
   for device in `ls *.cmd | cut -d. -f1`;
       do clogin -x $device.cmd $device > $device.log;
       done

   I go into how to do this easily using make later.

3) Look at the resulting $device.log files you generated, and make sure nothing crazy happened.  
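A quick, crude first pass (just a sketch; IOS error responses start with a "%"):

    grep -l '^%' *.log
    egrep -l 'Invalid input|Incomplete command' *.log

Any log those commands list deserves a closer look before you call the change done.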




  It really is that simple.  I can create cmd files for 4,000 devices, ensure the cmd files are changing the correct interface properties, ntp server names or whatever, and push them to all 4,000 devices, in a few hours.  Doing that by hand would take a week.  In some cases, I can push them in 30 minutes.

I have the minimum fileset you need to do that lying around somewhere, but basically that is it.


When I wrote sgrep, I was asking myself what would happen if I made a tool with more computing power than one that works on regular expressions.  grep gets its name from: global - 'regular expression' - print.  But regular expressions are limited in the complexity they can express.  I really wanted something Turing complete, but I never found a way to do it that was less complex than inventing a new language.  Eventually I decided to have it think in terms of stanzas and the operations you can perform on a stanza.  I started thinking about structure.  When you understand structure, you can look at things with more finesse.  Thanks to Dale, sgrep understands networks, which means you can look for OSPF stanzas with network statements that cover a range including a specific IP address.  If you grep for 10.1.2.3 in a router config, you won't find the OSPF stanza with network 10.1.2.0 255.255.255.0, but sgrep will.
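To make that last example concrete, here is a minimal shell sketch of the arithmetic behind that kind of network-aware match.  This is not sgrep's actual code; the addresses are just the ones from the example above:

dotted_to_int() {
    # turn a.b.c.d into one 32-bit integer
    set -- `echo "$1" | tr '.' ' '`
    echo $(( ($1 * 16777216) + ($2 * 65536) + ($3 * 256) + $4 ))
}
ip=`dotted_to_int 10.1.2.3`
net=`dotted_to_int 10.1.2.0`
mask=`dotted_to_int 255.255.255.0`
if [ $(( ip & mask )) -eq $(( net & mask )) ]; then
    echo "10.1.2.3 is covered by network 10.1.2.0 255.255.255.0"
fi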

I am thinking about a template language for sgrep, and once I write it I will let Dale know so he can put it on his page.  The thing is, start with grep and ask: what has more power than a regular expression?  Adding context is a logical next step; REs are context-free.  Then ask whether using this more capable structure results in clarity in how you express your will, and whether it is easy to understand, especially for people familiar with normal unix tools.

Anyway, let me expand a little on automating this with make.  Dave Plonka made some cool makefiles, and we've been tweaking them ever since.

Step 1 is always figuring out how to generate command files.  I have a mkcmdfile script I use and customize often for anything more complicated than a password change or a vlan edit.

Step 2 is always pushing those device.cmd files to the various devices they refer to.

Step 3 is always validating your work.


Let us expand on step 2.

When done with step 1, you have a bunch of device.cmd files, but you are no closer to getting work done than when you started.  But you need to understand the first step before the second step makes sense.  So given a directory full of .cmd files, you want to run those commands on the associated devices.

   - The hard way would be to invoke clogin directly (making sure each .cmd file already has conf t and write mem in it):
     for device in `ls *.cmd | cut -d. -f1`; do clogin -x $device.cmd $device > $device.log; done

   - The slightly less hard way would be to create a makefile with the following in it, and by hand add a rule to make all the .log files:

cat > Makefile
clogin = path-to/clogin2 -f path-to/.cloginrc
.SUFFIXES: .cmd .log

DIR = /home/netconfig-user/cms

TMPupgrade: device1.log device2.log device3.log # ...repeat for several thousand devices, hire an intern, a typist, whatever.

.cmd.log:
        @echo BEGIN .cmd.log $@
        base='$*'; \
        $(clogin) $${clogin_timeout:+-t$${clogin_timeout}} -x $< $${base%%_*} > $@ || (rm -f $@; exit 1)
        @echo END .cmd.log $@
ctrl-D

Sure, that looks easy to read.  Did my modem disconnect?  All it really says is: if you run "make device.log", make looks for a command file named device.cmd and "compiles" it with the 'clogin' compiler, effectively running:

    clogin -t timeout -x device.cmd device > device.log

(The -t only shows up if you have clogin_timeout set in your environment.)  The or (||) at the end of the recipe means that if clogin fails, make removes the partial .log file so a rerun will retry that device, and the failure is reported instead of leaving a bogus log behind.  One gotcha if you are typing the makefile in by hand: the indented recipe lines must begin with a literal tab, not spaces.
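To try the rule out on a single device before going wide, just ask make for that one log (device name made up):

    $configbox> make r-myrouter-100.log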


Given that, you could invoke make for each device.log by hand and it would be off and running, but that gets awfully tedious for a thousand devices.  This is what the TMPupgrade target and parallel make are for:

    $configbox> make -j 20 TMPupgrade

Boom, 20 at a time your system is clogin-ing into devices and pushing commands.

So the better way is to add an extra rule that generates the makefile you want, and then use that generated makefile to invoke clogin:

cat > Makefile
clogin = path-to/clogin2 -f path-to/.cloginrc
.SUFFIXES: .cmd .log

DIR = /home/netconfig-user/cms

upgrade.make:
        @print "TMPupgrade: " $$(ls *.cmd | sed -e 's/\.cmd$$/.log/')" \
        \n\nupgrade: " $$(ls *.cmd | sed -e 's/\.cmd$$/.log/')" \
        \n\ninclude ~net/cms/Makefile" > $@

.cmd.log:
        @echo BEGIN .cmd.log $@
        base='$*'; \
        $(clogin) $${clogin_timeout:+-t$${clogin_timeout}} -x $< $${base%%_*} > $@ || (rm -f $@; exit 1)
        @echo END .cmd.log $@
ctrl-D

Given the above makefile, you can make a new makefile with the correct targets.

    $configbox> make upgrade.make

This will use sed to replace the .cmd with a .log for every device.cmd file, and create a new makefile with a rule to make all the .log files.  Again, I should have the correct minimum fileset lying around somewhere; if someone wants help I can try to write a better explanation and post it.
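For concreteness, if the directory held three hypothetical devices named r1, r2 and r3, the generated file would look roughly like this:

$configbox> cat upgrade.make
TMPupgrade: r1.log r2.log r3.log

upgrade: r1.log r2.log r3.log

include ~net/cms/Makefile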

Once you have the new makefile, just use it.

$configbox> make -f upgrade.make -j 20

The process really is pretty damn simple, but the details are kind of tedious.

Once you get used to it, you can whip out scripts to do complex validation rules in no time flat.  On Dale's page you will also find some code I got from Cisco to merge CatOS/IOS configs from a 6500 in hybrid mode to a unified IOS-only mode, and merge them with a template.  I fixed some small bugs in it and got permission to open source it.  That code parses IOS configs fairly elegantly and shoves them into Perl data structures, which you can then use to make savvy choices with respect to configuration auditing.  The code is open source; have at it.

You can validate the configs on your devices, and best of all, you can validate your business logic.  If you name all your primary routers with odd numbers and your backups with even numbers, you can validate that the OSPF metrics on the primary are different from those on the secondary, because the script understands not only the configuration, but how the device fits into the network and what business logic applies to the configuration.
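Even before you pick up that parser, the same rancid .conf files plus a shell loop give you a crude audit.  A throwaway sketch (the ntp server value is made up):

for conf in r-*.conf; do
    grep -q 'ntp server 10.0.0.2' $conf || echo "MISSING new ntp server: ${conf%.conf}";
done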

I hope this makes things clearer, instead of less clear.

Cheers,

Elle Janet Plato
NS Application Developer/Consultant Network Engineer
University of Wisconsin, Madison

