Tag-Archive for » Puppet «

Tuesday, September 14th, 2010

I recently switched from the puppet daemon to mcollective-triggered runs: it gets rid of the infamous “stuck in outer space puppet daemon” problem and brings nice features such as load control. To do so I deployed the puppetd agent on all my boxes.

As most sysadmins say: “if it’s not monitored, it doesn’t exist”. I had a cron script, based on the puppetlast script, that reported by mail once a day which hosts had not checked in during the last 30 minutes. This method had two serious flaws: it only ran once a day (I hate being flooded with mail) and test machines kept nagging me until I removed their YAML files on the puppetmaster. Talking with Volcane on #mcollective I discovered that the agent is able to report when the client last ran, so I decided to use this to check the freshness of my puppet runs with a nagios plugin.

Good bye cron job, say hello nagios.
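The plugin itself is not reproduced here, but the idea fits in a few lines of ruby. The sketch below only illustrates the approach: it assumes the puppetd agent’s status action returns a lastrun timestamp (the exact field name depends on the agent version you deploy), and it reuses the same 30-minute threshold as the old cron job.

#!/usr/local/bin/ruby
# Nagios check for puppet run freshness, built on mcollective.
# Sketch only: assumes the puppetd agent's "status" action returns a
# :lastrun timestamp (field names may differ with your agent version).

require 'mcollective'
include MCollective::RPC

THRESHOLD = 30 * 60 # seconds

mc = rpcclient("puppetd")
mc.progress = false

stale = []
mc.status.each do |resp|
  lastrun = resp[:data][:lastrun].to_i
  stale << resp[:sender] if Time.now.to_i - lastrun > THRESHOLD
end
mc.disconnect

if stale.empty?
  puts "OK: all nodes ran puppet during the last 30 minutes"
  exit 0
else
  puts "CRITICAL: stale puppet runs on #{stale.join(', ')}"
  exit 2
end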

Wednesday, September 8th, 2010

Most people who work with puppet use a VCS: subversion, git, CVS, mercurial… pick yours. My company uses subversion, and each commit to the repository needs to be pulled by the master. Since I have two masters, I also want them to stay synchronized. Once again mcollective comes to the rescue. I wrote a very simple agent (five minutes of work, to be improved) that can update a specified path. Grab it here. Once it is deployed you can call it from a post-commit hook.

Here is mine:

#!/usr/local/bin/ruby

require 'mcollective'
include MCollective::RPC

# talk to the svnagent deployed on the nodes
mc = rpcclient("svnagent")
mc.progress = false
# only target the machines that include the puppet::master class
mc.class_filter "puppet::master"
# ask them to "svn update" the puppet tree
mc.update(:path => "/etc/puppet")
mc.disconnect

Thanks to the class filter, the agent is only called on machines that are puppet masters.
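The agent itself is only linked above; as an illustration, a SimpleRPC agent doing this kind of job is roughly shaped like the sketch below. The "update" action name matches the client call above; everything else (path to svn, reply fields) is an assumption, not the actual svnagent.

# Sketch of an svnagent-like SimpleRPC agent -- grab the real one at the link above.
module MCollective
  module Agent
    class Svnagent < RPC::Agent
      action "update" do
        validate :path, String
        # run "svn update" on the requested path and report its output
        reply[:status] = run("/usr/bin/svn update #{request[:path]}",
                             :stdout => :output, :chomp => true)
      end
    end
  end
end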

Friday, February 26th, 2010

On my Solaris machines at $WORK I use iMil‘s pkgin to install additional software. But until today, I had to do it by hand, on every machine… Not really what I like to do after a little more than a year of using puppet. So I wrote a provider to manage packages with pkgin. It taught me a lot about puppet internals, and I learned more about my favorite config management system.

Enough talking, here is the file: pkgin.rb

Example of use in a manifest:

class foo {
    package { "bla":
        ensure => installed,
        provider => pkgin
    }
}
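For the curious, a puppet package provider is just a ruby class registered against the package type. The sketch below shows the general shape of such a provider; it is not the pkgin.rb linked above, and the pkgin invocations and output parsing are simplified assumptions.

# General shape of a pkgin package provider (sketch, see pkgin.rb for the real thing)
Puppet::Type.type(:package).provide :pkgin, :parent => Puppet::Provider::Package do
    desc "Package management via pkgin (sketch)."

    commands :pkgin => "pkgin"   # puppet looks the binary up in the PATH

    def install
        pkgin "-y", "install", @resource[:name]
    end

    def uninstall
        pkgin "-y", "remove", @resource[:name]
    end

    def query
        # return a hash describing the installed package, nil when absent
        output = pkgin "list"
        output.split("\n").each do |line|
            if line =~ /^#{Regexp.escape(@resource[:name])}-([^\s]+)\s/
                return { :name => @resource[:name], :ensure => $1, :provider => self.class.name }
            end
        end
        nil
    end
end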
Monday, February 22nd, 2010

At $WORK I started using Nginx a while ago, first as a front end to my mongrel instances for puppet. Recently I began to use it for one of its best-known features: reverse proxying (with caching too). Of course this work had to be puppetized!

This is a summary of what I’ve done:

  • Basic setup
  • Automatic setup of the status page, used by a munin plugin
  • An “include” directory that can be host-specific through the usual $fqdn source selection mechanism (as can the nginx.conf file)
  • A reverse-proxy specific class that uses a template embedding some ruby (see the previous post). My cache dir lives on tmpfs, to speed up the whole thing.

This setup is mostly inspired by this post. I use a local dnsmasq setup to resolve both internal & external requests. This way I can control which vhosts are accessible from inside or outside our network. It’s incredibly flexible and lets you get the most out of your infrastructure.

The puppet class:

# @name : nginx
# @desc : base class for nginx
# @info : nil
class nginx
{
    package { "nginx":
        ensure => installed
    }

    service { "nginx":
        ensure => running
    }

    file { "nginx.conf":
        name => "/etc/nginx/nginx.conf",
        owner => root,
        group => root,
        source => [ "puppet://$fileserver/files/apps/nginx/$fqdn/nginx-rp-secure.conf", "puppet://$fileserver/files/apps/nginx/nginx-rp-secure.conf"],
        ensure => present,
        notify => Service["nginx"]
    }

    # status is installed on all nginx boxens
    file { "nginx-status":
        name => "/etc/nginx/sites-enabled/nginx-status",
        owner => root,
        group => root,
        source => [ "puppet://$fileserver/files/apps/nginx/nginx-status", "puppet://$fileserver/files/apps/nginx/$fqdn/nginx-status"],
        ensure => present,
        notify => Service["nginx"]
    }

    # include dir, get the freshness here
    file { "include_dir":
        name => "/etc/nginx/includes",
        owner => root,
        group => root,
        source => [ "puppet://$fileserver/files/apps/nginx/includes.$fqdn", "puppet://$fileserver/files/apps/nginx/includes"],
        ensure => directory,
        recurse => true,
        notify => Service["nginx"],
        ignore => ".svn*"
    }

    # files managed by hand, no matter if it breaks
    file { "sites-managed":
        name => "/etc/nginx/sites-managed",
        owner => root,
        group => root,
        ensure => directory
    }
}

# @name : nginx::reverseproxy
# @desc : nginx config for a reverse proxy
# @info : used in conjunction with a local dnsmasq
class nginx::reverseproxy
{
    include nginx
    include dnsmasq::reverseproxy

    # Vars used by the template below
    $mysqldatabase=extlookup("mysqldatabase")
    $mysqllogin=extlookup("mysqllogin")
    $mysqlpassword=extlookup("mysqlpassword")
    $mysqlserver=extlookup("mysqlserver")

    file { "nginx-cachedir":
        name => "/dev/shm/nginx-cache",
        owner => www-data,
        group => www-data,
        ensure => directory
    }

    file { "site_reverse-proxy":
        name => "/etc/nginx/sites-enabled/reverse-proxy",
        owner => root,
        group => root,
        content => template("nginx/$fqdn/reverse-proxy.erb"),
        ensure => present,
        notify => Service["nginx"],
        require => File["nginx-cachedir"]
    }
}

Here are the munin plugins that are automatically distributed to the box.
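As an illustration of what such a plugin does, here is a minimal sketch of a munin plugin reading the nginx stub_status page (this is not one of the actual plugins distributed above, and the status URL is an assumption matching the nginx-status vhost):

#!/usr/bin/env ruby
# Minimal sketch of a munin plugin reading the nginx stub_status page.
# Not the actual distributed plugin; the status URL is an assumption.
require 'net/http'
require 'uri'

url = URI.parse(ENV['status_url'] || "http://localhost/nginx_status")

if ARGV[0] == "config"
  puts "graph_title Nginx requests"
  puts "graph_category nginx"
  puts "graph_vlabel requests"
  puts "requests.label requests"
  puts "requests.type DERIVE"
  puts "requests.min 0"
  exit 0
end

body = Net::HTTP.get(url)
# third line of stub_status output: "<accepts> <handled> <requests>"
accepts, handled, requests = body.split("\n")[2].split.map { |v| v.to_i }
puts "requests.value #{requests}"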

One of the generated graphs:

[graph: nginx_requests-day]

Wednesday, January 27th, 2010

Today I started installing a reverse proxy at $WORK. I chose to do it this way, and all my DNS data is stored in my CMDB. Once again, the solution came from #puppet! You can embed “pure” ruby code in ERB templates. And, yes, you can query your database!

<%
# DBI ships with the libdbd-mysql-ruby package on debian
require 'dbi'
dbh = DBI.connect("DBI:Mysql:yourbase:mysql.mycorp.com", "you", "XXXX")
query = dbh.prepare("your fancy query")
query.execute
while row = query.fetch do
  todisplay = some_funny_things(row)  # build whatever you need from the current row
%>
<%= todisplay %>
<% end
query.finish
dbh.disconnect %>

I use this technique to generate the dnsmasq data file. Just use the subscribe metaparameter and you’re done!

Wednesday, August 5th, 2009

Disclaimer: this work is mostly based upon DavidS’ work, available on his git repo. For my own needs I had to add munin support for FreeBSD & Solaris. I also wrote a class for snmp_plugins & custom plugins. Some things are quite dependent on my infrastructure, like the munin.conf generation script, but it can easily be adapted to yours by extracting data from your CMDB (a sketch of such a generator is shown after the code below).

It requires the munin_interfaces fact published here (and merged into DavidS’ repo, thanks to him), and Volcane’s extlookup function to store some parameters. Enough talking, this is the code:

# Munin config class
# Many parts taken from David Schmitt's http://git.black.co.at/
# FreeBSD & Solaris + SNMP & custom plugins support by Nicolas Szalay <nico@gcu.info>
 
class munin::node {
	case $operatingsystem {
		openbsd: {}
		debian: { include munin::node::debian}
		freebsd: { include munin::node::freebsd}
		solaris: { include munin::node::solaris}
		default: {}
	}
}
 
class munin::node::debian {
 
	package { "munin-node": ensure => installed }
 
	file { 
	"/etc/munin":
		ensure => directory,
		mode => 0755,
		owner => root,
		group => root;
 
	"/etc/munin/munin-node.conf":
		source => "puppet://$fileserver/files/apps/munin/munin-node-debian.conf",
		owner => root,
		group => root,
		mode => 0644,
		before => Package["munin-node"],
		notify => Service["munin-node"],
	}
 
	service { "munin-node": ensure => running }
 
	include munin::plugins::linux 
}
 
class munin::node::freebsd {
	package { "munin-node": ensure => installed, provider => freebsd }
 
        file { "/usr/local/etc/munin/munin-node.conf":
                source => "puppet://$fileserver/files/apps/munin/munin-node-freebsd.conf",
                owner => root,
                group => wheel,
                mode => 0644,
                before => Package["munin-node"],
                notify => Service["munin-node"],
        }
 
	service { "munin-node": ensure => running }
 
	include munin::plugins::freebsd
}
 
class munin::node::solaris {
	# "hand made" install, no package.
	file { "/etc/munin/munin-node.conf":
		source => "puppet://$fileserver/files/apps/munin/munin-node-solaris.conf",
                owner => root,
                group => root,
                mode => 0644
	}
 
	include munin::plugins::solaris
}
 
class munin::gatherer {
	package { "munin":
		ensure => installed
	}
 
	# custom version of munin-graph : forks & generates many graphs in parallel
	file { "/usr/share/munin/munin-graph":
		owner => root,
		group => root,
		mode => 0755,
		source => "puppet://$fileserver/files/apps/munin/gatherer/munin-graph",
		require => Package["munin"]
	}
 
	# custom version of the debian cron file. Month & year crons are generated once daily
	file { "/etc/cron.d/munin":
		owner => root,
		group => root,
		mode => 0644,
		source => "puppet://$fileserver/files/apps/munin/gatherer/munin.cron",
		require => Package["munin"]
	}
 
	# Ensure cron is running, to fetch every 5 minutes
	service { "cron":
		ensure => running
	}
 
	# Ruby DBI for mysql
	package { "libdbd-mysql-ruby":
		ensure => installed
	}
 
	# config generator
	file { "/opt/scripts/muningen.rb":
		owner => root,
		group => root,
		mode => 0755,
		source => "puppet://$fileserver/files/apps/munin/gatherer/muningen.rb",
		require => Package["munin", "libdbd-mysql-ruby"]
	}	
 
	# regenerate munin's gatherer config every hour
	cron { "munin_config":
		command => "/opt/scripts/muningen.rb > /etc/munin/munin.conf",
		user => "root",
		minute => "0",
		require => File["/opt/scripts/muningen.rb"]
	}
 
	include munin::plugins::snmp
	include munin::plugins::linux
	include munin::plugins::custom::gatherer
}
 
 
# define to create a munin plugin inside the right directory
define munin::plugin ($ensure = "present") {
 
	case $operatingsystem {
		freebsd: { 
			$script_path = "/usr/local/share/munin/plugins"
			$plugins_dir = "/usr/local/etc/munin/plugins"
		}
		debian: { 
			$script_path = "/usr/share/munin/plugins"
			$plugins_dir = "/etc/munin/plugins"
		}
		solaris: { 
			$script_path = "/usr/local/munin/lib/plugins"
			$plugins_dir = "/etc/munin/plugins"
		}
		default: { }
	}
 
	$plugin = "$plugins_dir/$name"
 
	case $ensure {
		"absent": {
			debug ( "munin_plugin: suppressing $plugin" )
			file { $plugin: ensure => absent, } 
		}
 
		default: {
			$plugin_src = $ensure ? { "present" => $name, default => $ensure }
 
			file { $plugin:
				ensure => "$script_path/${plugin_src}",
				require => Package["munin-node"],
				notify => Service["munin-node"],
			}
		}
	}
}
 
# snmp plugin define, almost same as above
define munin::snmp_plugin ($ensure = "present") {
	$pluginname = get_plugin_name($name)
 
	case $operatingsystem {
		freebsd: { 
			$script_path = "/usr/local/share/munin/plugins"
			$plugins_dir = "/usr/local/etc/munin/plugins"
		}
		debian: { 
			$script_path = "/usr/share/munin/plugins"
			$plugins_dir = "/etc/munin/plugins"
		}
		solaris: { 
			$script_path = "/usr/local/munin/lib/plugins"
			$plugins_dir = "/etc/munin/plugins"
		}
		default: { }
	}
 
	$plugin = "$plugins_dir/$name"
 
	case $ensure {
		"absent": {
			debug ( "munin_plugin: suppressing $plugin" )
			file { $plugin: ensure => absent, } 
		}
 
		"present": {
			file { $plugin:
				ensure => "$script_path/${pluginname}",
				require => Package["munin-node"],
				notify => Service["munin-node"],
			}
		}
	}
}
 
class munin::plugins::base
{
	case $operatingsystem {
		debian: { $plugins_dir = "/etc/munin/plugins" }
		freebsd: { $plugins_dir = "/usr/local/etc/munin/plugins" }
		solaris: { $plugins_dir = "/etc/munin/plugins" }
		default: {}
	}
 
	file { $plugins_dir:
		source => "puppet://$fileserver/files/empty",
		ensure => directory,
		checksum => mtime,
		ignore => ".svn*",
		mode => 0755,
		recurse => true,
		purge => true,
		force => true,
		owner => root
	}
}
 
class munin::plugins::interfaces
{
	$ifs = gsub(split($munin_interfaces, " "), "(.+)", "if_\\1")
	$if_errs = gsub(split($munin_interfaces, " "), "(.+)", "if_err_\\1")
	plugin {
		$ifs: ensure => "if_";
		$if_errs: ensure => "if_err_";
	}
 
	include munin::plugins::base
}
 
class munin::plugins::linux 
{
	plugin { [ cpu, load, memory, swap, irq_stats, df, processes, open_files, ntp_offset, vmstat ]: 
		ensure => "present"
	}
 
	include munin::plugins::base
	include munin::plugins::interfaces
}
 
class munin::plugins::nfsclient
{
	plugin { "nfs_client":
		ensure => present
	}
}
 
class munin::plugins::snmp
{
	# initialize plugins
	$snmp_plugins=extlookup("munin_snmp_plugins")
	snmp_plugin { $snmp_plugins:
		ensure => present
	}
 
	# SNMP communities used by plugins
	file { "/etc/munin/plugin-conf.d/snmp_communities":
		owner => root,
		group => root,
		mode => 0644,
		source => "puppet://$fileserver/files/apps/munin/gatherer/snmp_communities"
	}
 
}
 
define munin::custom_plugin($ensure = "present", $location = "/etc/munin/plugins") {
	$plugin = "$location/$name"
 
	case $ensure {
		"absent": {
			file { $plugin: ensure => absent, } 
		}
 
		"present": {
			file { $plugin:
				owner => root,
				mode => 0755,
				source => "puppet://$fileserver/files/apps/munin/custom_plugins/$name",
				require => Package["munin-node"],
				notify => Service["munin-node"],
			}
		}
	}
}
 
class munin::plugins::custom::gatherer
{
	$plugins=extlookup("munin_custom_plugins")
	custom_plugin { $plugins:
		ensure => present
	}
}
 
class munin::plugins::freebsd 
{
	plugin { [ cpu, load, memory, swap, irq_stats, df, processes, open_files, ntp_offset, vmstat ]: 
		ensure => "present",
	}
 
	include munin::plugins::base
	include munin::plugins::interfaces
}
 
class munin::plugins::solaris 
{
	# Munin plugins on solaris are quite ... buggy. Will need rewrite / custom plugins.
	plugin { [ cpu, load, netstat ]: 
		ensure => "present",
	}
 
	include munin::plugins::base
	include munin::plugins::interfaces
}
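The muningen.rb config generator is not included here since it is tied to my CMDB schema. A hedged sketch of the idea, assuming a hosts table with name and ip columns (database, table and column names are made up), writing the munin.conf to stdout as the cron entry above expects:

#!/usr/bin/env ruby
# Sketch of a munin.conf generator fed from a MySQL CMDB.
# Not the real muningen.rb: database, table and column names are assumptions.
require 'dbi'

dbh = DBI.connect("DBI:Mysql:cmdb:mysql.mycorp.com", "munin", "XXXX")

# global gatherer settings
puts "dbdir  /var/lib/munin"
puts "htmldir /var/www/munin"
puts "logdir /var/log/munin"
puts ""

# one [host] stanza per CMDB entry
sth = dbh.prepare("SELECT name, ip FROM hosts ORDER BY name")
sth.execute
while row = sth.fetch do
  puts "[#{row[0]}]"
  puts "    address #{row[1]}"
  puts "    use_node_name yes"
  puts ""
end
sth.finish
dbh.disconnect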
Thursday, July 30th, 2009

-A post in English, for once-

Everyone using puppet knows DavidS’ awesome git repository: git.black.co.at. Unfortunately for me, his puppet infrastructure seems to be almost exclusively Linux-based, while mine runs several OSes, including FreeBSD & OpenSolaris. Looking at his module-munin I decided to reuse it (rather than reinvent the wheel), but it relies on a custom fact that needed a little work. So here is a FreeBSD & (Open)Solaris capable version that reports which network interfaces have link up.

# return the set of active interfaces as an array
# taken from http://git.black.co.at
# modified by nico <nico@gcu.info> to add FreeBSD & Solaris support
 
Facter.add("munin_interfaces") do
 
	setcode do
		# linux
		if Facter.value('kernel') == "Linux" then
			`ip -o link show`.split(/\n/).collect do |line|
				value = nil
				matches = line.match(/^\d*: ([^:]*): <(.*,)?UP(,.*)?>/)
				if !matches.nil?
					value = matches[1]
					value.gsub!(/@.*/, '')
				end
				value
			end.compact.sort.join(" ")
 
		# freebsd
		elsif Facter.value('kernel') == "FreeBSD" then
			Facter.value('interfaces').split(/,/).collect do |interface|
				status = `ifconfig #{interface} | grep status`
				if status != "" then
					status=status.strip!.split(":")[1].strip!
					if status == "active" then # I CAN HAZ LINK ?
						interface.to_a
					end
				end
			end.compact.sort.join(" ")
 
		# solaris
		elsif Facter.value('kernel') == "SunOS" then
			Facter.value('interfaces').split(/,/).collect do |interface|
				if interface != "lo0" then # /dev/lo0 does not exists
					status = `ndd -get /dev/#{interface} link_status`.strip!
					if status == "1" # ndd returns 1 for link up, 0 for down
						interface.to_a
					end
				end
			end.compact.sort.join(" ")
		end
	end
end

Thanks to Volcane from IRC for helping me.

Monday, July 6th, 2009

Some things are just common sense, yet we don’t always think of them. In the sysadmin world, some people advocate what is called “agility”: applying techniques originally designed for development to system administration. Personally, I find these are mostly common-sense recommendations, experience talking, basically.

One of these “best practices” is that the code is the documentation, which saves you from having to write it separately (because, let’s admit it, that’s a chore) or even from forgetting to write it at all, which happens often.

Since I am in the middle of a puppet frenzy, I write classes all over the place but not the documentation that should go with them… and that’s BAD. So I started a small project to document my puppet classes, quick and dirty.

You just have to add above each class (or below it, it doesn’t matter, the code is dirty and doesn’t make the difference) comments formatted as follows:

# @name : debian
# @desc : base class for debian
# @info : do not assign it directly, it will be included automatically

The script turns these comments into a three-column, dokuwiki-style table.

The script then pushes the result to the dokuwiki via XML-RPC, into the page specified in the script. Whatever the page contains gets overwritten; this is not an append.

Personally I combine this script with the dokuwiki “include” plugin, so one of my wiki pages contains the following syntax:

{{page>:auto_puppetclasses}}

XML-RPC is disabled by default in dokuwiki; to enable it, add the following to conf/local.php:

$conf['xmlrpc'] = 1;

And finally, the ruby script with some XML-RPC bits inside (available in the libruby package on debian). Warning: dirty code.

#!/usr/bin/env ruby
#
# Automagicaly adds puppet classes in the dokuwiki corporate documentation
# For use with the "include" plugin
# By nico <nico@gcu.info>
 
require 'xmlrpc/client'
 
xmlrpc_url = "http://XXXXX/wiki/lib/exe/xmlrpc.php"
basedirs = [ "/home/nico/sysadmin/puppet/manifests/classes", "/home/nico/sysadmin/puppet/manifests/os"]
destination_page = "auto_puppetclasses"
 
server = XMLRPC::Client.new2(xmlrpc_url)
 
def doku_class(classfile, final_page)
	fp=File.open(classfile)
	fp.readlines().each { |line|
 
		line=line.gsub("n","")
 
		if line =~ /@name/
			final_page += "| " + line[10,line.length]
		end
 
		if line =~ /@desc/
			final_page += " | " + line[10,line.length] + " | "
		end
 
		if line =~ /@info/
			final_page += line[10,line.length] + " |\n"
		end
	}
 
	return final_page
end
 
final_page=""
 
# output some dokuwiki thing
final_page += "===== Classes disponibles =====n"
final_page +=  "n"
final_page +=  "^ Nom de la classe ^ Description rapide ^ Infos supplémentaires ^n"
 
basedirs.each{ |basedir|
	Dir.new(basedir).entries.each { |entry|
		if File.file? basedir+"/"+entry then
			final_page=doku_class(basedir+"/"+entry,final_page)
		end
	}
}
 
server.call2("wiki.putPage", destination_page, final_page, "", 0)
Monday, June 22nd, 2009

I mentioned this a while ago to a few people, so here it is: my /etc/puppet. Don’t expect anything exceptional, since it does not follow the best practices, notably splitting things into modules. Incidentally, some classes need to be rewritten (usually the first ones I designed).

Included are my external node script and Volcane’s extlookup script (from #puppet, credits inside). The files part is empty because hunting down everything that should not be disclosed was too much of a pain, but the structure is there.

Have fun.

Archive /etc/puppet

Friday, May 22nd, 2009

A small follow-up to my previous post about using a load balancer in front of puppet. In 0.24.5 there is an annoying bug if you use external nodes: in my case the node configuration is stored in a mysql database and a script outputs the corresponding YAML.

To fix this bug, which leaves connections in CLOSE_WAIT (see netstat), I strongly advise upgrading to version 0.24.8. With apt pinning this causes no trouble at all.
