Tag-Archive for » SysAdmin «

Wednesday, August 18th, 2010

In my last post I talked about mcollective (check the new website) and mongo, and how awesome they are.

I mentioned the docs available in the wiki, but I was a bit too quick on that point; it’s not all in the wiki yet (only some parts), so here is a little “guide” to getting the pieces to work together. All comments are welcome.

  • Deploy meta.rb on all your  nodes. It will make the metadata available to other nodes.
  • Add the following lines to your server.cfg file :
registration = Meta
registerinterval = 300
  • Install a mongoDB server (debian ships it in squeeze)
  • Deploy the mongo registration agent on one node (don’t be like me, do not start by deploying it on all nodes !). It will “suck” metadata from the nodes and insert it into the mongo database.
  • Add the following lines to your server.cfg on the host with the registration agent :
plugin.registration.mongohost = mongo.mycorp.net
plugin.registration.mongodb = puppet
plugin.registration.collection = nodes
  • Connect to your mongoDB and enjoy :
$ mongo mongo.mycorp.net/puppet
MongoDB shell version: 1.6.0
connecting to: mongo.mycorp.net/puppet
> db.nodes.find().count()
59
  • You’re done !
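
Once the data is in, you can also query it from Ruby. Here is a minimal, hedged sketch using the mongo gem; the field names (fqdn, facts) are assumptions based on what meta.rb collects, so adapt them to what you actually see in the collection :

#!/usr/bin/env ruby
# Quick query against the registration data (a sketch, not from the original setup).
require 'rubygems'
require 'mongo'

db    = Mongo::Connection.new("mongo.mycorp.net").db("puppet")
nodes = db.collection("nodes")

# List the fqdn of every registered Debian node (field names are assumptions).
nodes.find("facts.operatingsystem" => "Debian").each do |node|
  puts node["fqdn"]
end
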
Category: BOFH Life, SysAdmin, Tech
Wednesday, June 23rd, 2010

If you read this blog you may know I use puppet daily at $WORK. Puppet is made to maintain configuration on machines, not for one-shot actions. For a couple of months I used fabric for this, but it had a few drawbacks, mainly that you need to maintain the list of hosts you want to act on and that it hates dead hosts, even if ghantoos dropped a link showing how to get rid of this. So I replaced it with mcollective : not exactly the same thing (it uses an agent instead of SSH), but it dynamically knows which hosts are up and allows filtering on facts (from facter, the puppet companion).

You could think that needing an agent is a drawback, but it allows much more complex actions and fine-grained logic. Moreover, it does not require much work to install if you already have puppet running. If you have a large number of machines you gain scalability with truly parallel actions : it does not take longer to run on 100 machines than on 5.

One more point is the active development : R.I. Pienaar released two versions recently, improving access control and adding a DDL to create interfaces easily. You can give your unprivileged staff some power through a web interface in less than 100 lines of code.
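
As an illustration of the DDL part, here is a hedged sketch of what a DDL for a trivial “echo” action could look like. The agent name and fields are made up for the example; only the general shape follows the SimpleRPC DDL format :

# echo.ddl -- describes the interface of a hypothetical "echo" agent
metadata :name        => "echo",
         :description => "Example agent that echoes a message back",
         :author      => "nico",
         :license     => "GPLv2",
         :version     => "0.1",
         :url         => "http://example.com",
         :timeout     => 10

action "echo", :description => "Echo a message back to the caller" do
  input :msg,
        :prompt      => "Message",
        :description => "The message to echo",
        :type        => :string,
        :validation  => '^.+$',
        :optional    => false,
        :maxlength   => 100

  output :msg,
         :description => "The message sent back",
         :display_as  => "Message"
end
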

A tool for sysadmins who appreciate the devops spirit and want to do more in less time !

I’ll be publishing my agent and related tools on my github account.

Category: BOFH Life, SysAdmin, Tech
Wednesday, April 14th, 2010

I already blogged about my experiments with mcollective & xen, but I had something a little bigger in mind. A friend had sent me a video showing some neat VMware features (mainly DRS), with VMs migrating between hypervisors automatically.

So I wrote a “proof of concept” of what you can do with an awesome tool like mcollective. The setup of this funny game is the following :

  • 1 box used as an iSCSI target that serves volumes to the world
  • 2 xen hypervisors (lenny packages) using the open-iscsi initiator to connect to the target. VMs are stored on LVM, nothing fancy

The 3 boxens are connected on a 100Mb network, and the hypervisors have an additional gigabit network card with a crossover cable linking them (yes, this is a lab setup). You can find a live migration howto here.
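
For reference (the howto covers the details), live migration must be enabled on both hypervisors in /etc/xen/xend-config.sxp; roughly something like this, where the hosts-allow pattern is just an example that you should restrict to your own network :

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '^localhost$ ^hypervisor[23]\.mycorp\.net$')
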

For the mcollective part I used my Xen agent (slightly modified from the previous post to support migration), which is based on my xen gem. The client is the largest part of the work, but it’s still less than 200 lines of code. It can (and will) be improved, because all the config is hardcoded. It would also deserve a little DSL to handle more “logic” than “if load is higher than foo”, but as I said before, it’s a proof of concept.
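
To give an idea of the shape of the agent side, here is a minimal, hedged sketch of a SimpleRPC “migrate” action. It is not the actual agent (which goes through the xen gem); shelling out to xm is just the simplest way to show the idea :

module MCollective
  module Agent
    # Sketch of an agent exposing a live migration action.
    class Xen < RPC::Agent
      action "migrate" do
        validate :domain, String
        validate :destination, String

        domain      = request[:domain]
        destination = request[:destination]

        # xm migrate --live performs the live migration between hypervisors
        if system("xm", "migrate", "--live", domain, destination)
          reply[:status] = "migrated #{domain} to #{destination}"
        else
          reply.fail! "migration of #{domain} to #{destination} failed"
        end
      end
    end
  end
end
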

Let’s see it in action :

hypervisor2:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   233     2     r-----    873.5
hypervisor3:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   232     2     r-----  78838.0
test1                                        6   256     1     -b----     18.4
test2                                        4   256     1     -b----     19.3
test3                                       20   256     1     r-----     11.9

test3 is a VM that is “artificially” loaded, as is the machine “hypervisor3” (to trigger the migration).

[mordor:~] ./mc-xen-balancer
[+] hypervisor2 : 0.0 load and 0 slice(s) running
[+] init/reset load counter for hypervisor2
[+] hypervisor2 has no slices consuming CPU time
[+] hypervisor3 : 1.11 load and 3 slice(s) running
[+] added test1 on hypervisor3 with 0 CPU time (registered 18.4 as a reference)
[+] added test2 on hypervisor3 with 0 CPU time (registered 19.4 as a reference)
[+] added test3 on hypervisor3 with 0 CPU time (registered 18.3 as a reference)
[+] sleeping for 30 seconds

[+] hypervisor2 : 0.0 load and 0 slice(s) running
[+] init/reset load counter for hypervisor2
[+] hypervisor2 has no slices consuming CPU time
[+] hypervisor3 : 1.33 load and 3 slice(s) running
[+] updated test1 on hypervisor3 with 0.0 CPU time eaten (registered 18.4 as a reference)
[+] updated test2 on hypervisor3 with 0.0 CPU time eaten (registered 19.4 as a reference)
[+] updated test3 on hypervisor3 with 1.5 CPU time eaten (registered 19.8 as a reference)
[+] sleeping for 30 seconds

[+] hypervisor2 : 0.16 load and 0 slice(s) running
[+] init/reset load counter for hypervisor2
[+] hypervisor2 has no slices consuming CPU time
[+] hypervisor3 : 1.33 load and 3 slice(s) running
[+] updated test1 on hypervisor3 with 0.0 CPU time eaten (registered 18.4 as a reference)
[+] updated test2 on hypervisor3 with 0.0 CPU time eaten (registered 19.4 as a reference)
[+] updated test3 on hypervisor3 with 1.7 CPU time eaten (registered 21.5 as a reference)
[+] hypervisor3 has 3 threshold overload
[+] Time to see if we can migrate a VM from hypervisor3
[+] VM key : hypervisor3-test3
[+] Time consumed in a run (interval is 30s) : 1.7
[+] hypervisor2 is a candidate for being a host (step 1 : max VMs)
[+] hypervisor2 is a candidate for being a host (step 2 : max load)
trying to migrate test3 from hypervisor3 to hypervisor2 (10.0.0.2)
Successfully migrated test3 !

Let’s see our hypervisors :

hypervisor2:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   233     2     r-----    878.9
test3                                       25   256     1     -b----      1.1
hypervisor3:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   232     2     r-----  79079.3
test1                                        6   256     1     -b----     18.4
test2                                        4   256     1     -b----     19.4

A little word about the configuration options (a sample configuration is sketched after the list) :

  • interval : the poll time in seconds. This should not be too low; give the machine some time so that load peaks do not distort the logic.
  • load_threshold : the load above which you consider the machine too loaded and it is time to move some stuff away (tempered by max_over, see below)
  • daemonize : not used yet
  • max_over : the maximum time (in minutes) the load may stay above the threshold. When it is reached, it is really time to migrate. Don’t set it too low, and use at least 2*interval or the sampling will not be efficient
  • debug : well….
  • max_vm_per_host : the maximum number of VMs a host can handle. If a host has already hit this limit, it will not be a candidate for receiving a VM
  • max_load_candidate : same thing as above, but for the load
  • host_mapping : a simple CSV file to handle non-DNS destinations (typically, my crossover cable addresses have no DNS entries)
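
To make it concrete, here is roughly what the hardcoded configuration looks like, written as a Ruby hash. The values below are only examples (the interval matches the 30 seconds used in the run above); tune them to your own setup :

# Example values only -- the real client hardcodes its own.
CONFIG = {
  :interval           => 30,            # poll time in seconds
  :load_threshold     => 1.0,           # load above which a hypervisor is considered overloaded
  :max_over           => 1,             # minutes the load may stay above the threshold
  :daemonize          => false,         # not used yet
  :debug              => true,
  :max_vm_per_host    => 10,            # a host at this limit cannot receive a VM
  :max_load_candidate => 0.5,           # a host above this load cannot receive a VM
  :host_mapping       => "mapping.csv"  # CSV file mapping hosts to non-DNS addresses
}
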

What is left to do :

  • Add some barriers to avoid migration madness : let the load go down after a migration, and avoid migrating the same VM back and forth
  • Add a DSL to insert some more logic
  • Write a real client, not a big fat loop

Enjoy the tool !

Files :

Monday, February 22nd, 2010

At $WORK I started using Nginx a while ago, first as a front end to my mongrel instances for puppet. Recently I began to use it for one of its best-known features : reverse proxying (and caching too). Of course this work had to be puppetized !

This is a summary of what I’ve done :

  • Basic setup
  • Automatic setup of the status page, exploited by a munin plugin (a typical status vhost is sketched after this list)
  • An “include” directory that can be specific to a host through the usual $fqdn source selection system (just like the nginx.conf file).
  • A “reverse proxy” specific class that uses a template embedding some ruby (see the previous post). My cache dir is on tmpfs, to speed the whole thing up.
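
The nginx-status file itself is not shown here, but a typical status vhost for munin looks roughly like this (the listen address and ACL are examples) :

server {
    listen 127.0.0.1:80;
    server_name localhost;

    location /nginx_status {
        stub_status on;          # ngx_http_stub_status_module
        access_log  off;
        allow 127.0.0.1;         # only the local munin plugin may read it
        deny  all;
    }
}
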

This setup is mostly inspired by this post. I use a local dnsmasq setup to resolve both internal & external requests. This way I can manage vhosts that are accessible from inside or outside our network. It’s incredibly flexible and allows you to get the most from your infrastructure.

The puppet class :

# @name : nginx
# @desc : classe de base pour nginx
# @info : nil
class nginx
{
 package { "nginx":
 ensure => installed
 }
 
 service { "nginx":
 ensure => running
 }
 
 file { "nginx.conf":
 name => "/etc/nginx/nginx.conf",
 owner => root,
 group => root,
 source => [ "puppet://$fileserver/files/apps/nginx/$fqdn/nginx-rp-secure.conf", "puppet://$fileserver/files/apps/nginx/nginx-rp-secure.conf"],
 ensure => present,
 notify => Service["nginx"]
 }
 
 # status is installed on all nginx boxens
 file { "nginx-status":
 name => "/etc/nginx/sites-enabled/nginx-status",
 owner => root,
 group => root,
 source => [ "puppet://$fileserver/files/apps/nginx/nginx-status", "puppet://$fileserver/files/apps/nginx/$fqdn/nginx-status"],
 ensure => present,
 notify => Service["nginx"]
 }
 
 # include dir, get the freshness here
 file { "include_dir":
 name => "/etc/nginx/includes",
 owner => root,
 group => root,
 source => [ "puppet://$fileserver/files/apps/nginx/includes.$fqdn", "puppet://$fileserver/files/apps/nginx/includes"],
 ensure => directory,
 recurse => true,
 notify => Service["nginx"],
 ignore => ".svn*"
 }
 
 # files managed by hand, no matter if it breaks
 file { "sites-managed":
 name => "/etc/nginx/sites-managed",
 owner => root,
 group => root,
 ensure => directory
 }
}
 
# @name : nginx::reverseproxy
# @desc : config nginx pour reverse proxy
# @info : utilisée en conjonction avec dnsmasq local
class nginx::reverseproxy
{
 include nginx
 include dnsmasq::reverseproxy
 
 # Vars used by the template below
 $mysqldatabase=extlookup("mysqldatabase")
 $mysqllogin=extlookup("mysqllogin")
 $mysqlpassword=extlookup("mysqlpassword")
 $mysqlserver=extlookup("mysqlserver")
 
 file { "nginx-cachedir":
 name => "/dev/shm/nginx-cache",
 owner => www-data,
 group => www-data,
 ensure => directory
 }
 
 file { "site_reverse-proxy":
 name => "/etc/nginx/sites-enabled/reverse-proxy",
 owner => root,
 group => root,
 content => template("nginx/$fqdn/reverse-proxy.erb"),
 ensure => present,
 notify => Service["nginx"],
 require => File["nginx-cachedir"]
 }
 
}

Here are the munin plugins that are automatically distributed with the box.

One of the generated graphs : nginx_requests (daily view).

Category: BOFH Life, Puppet, SysAdmin
Wednesday, December 16th, 2009

At $WORK I am reinstalling some Solaris 10 boxes, and a few little things make the environment not immediately friendly, especially when you are used to a Debian where there are tons of wrappers. Fortunately, uncle iMil’s latest creation is available and works like a charm. Likewise, there is no sudo available by default, but pfexec is there to replace it very advantageously; a little “pfexec for dummies” is in order. Then a bit of ZFS to carry on and everything will be shiny & sparkling again.

Monday, July 06th, 2009

Some things are just common sense, but we don’t always think of them. In the sysadmin world, some people advocate for what is called “agility”, which basically means applying techniques originally designed for development to the world of system administration. Personally, I find these are mostly common-sense pieces of advice; experience talking, in short.

One of these “good practices” is that the code is the documentation, which avoids having to rewrite it (because, let’s admit it, that’s a pain) or even forgetting to write it at all, which happens often.

As I am in the middle of a puppet frenzy, I write classes like crazy but not the documentation that goes with them… and that’s NOT GOOD. So I started a little project to document my puppet classes, quick and dirty.

You just have to add, above each class (or below it, it doesn’t matter, the code is dirty and does not make the difference), comments formatted as follows :

# @name : debian
# @desc : classe de base pour debian
# @info : ne pas affecter, sera incluse automatiquement

The script then outputs the data as a 3-column dokuwiki-style table.
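
With the comment block above, the generated markup looks roughly like this (one header row, then one row per class) :

^ Nom de la classe ^ Description rapide ^ Infos supplémentaires ^
| debian | classe de base pour debian | ne pas affecter, sera incluse automatiquement |
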

The script then posts the information to the dokuwiki via XML-RPC, in the page specified in the script. Everything the page contains will be overwritten; this is not an “append”.

Personally I combine this script with the dokuwiki “include” plugin, so I have the following syntax in one of my wiki pages :

{{page>:auto_puppetclasses}}

XML-RPC is disabled by default in dokuwiki; to enable it, add this to conf/local.php :

$conf['xmlrpc'] = 1;

And finally, the ruby script with some XML-RPC bits inside (available in the libruby package on debian). Warning: dirty code.

#!/usr/bin/env ruby
#
# Automagicaly adds puppet classes in the dokuwiki corporate documentation
# For use with the "include" plugin
# By nico <nico@gcu.info>
 
require 'xmlrpc/client'
 
xmlrpc_url = "http://XXXXX/wiki/lib/exe/xmlrpc.php"
basedirs = [ "/home/nico/sysadmin/puppet/manifests/classes", "/home/nico/sysadmin/puppet/manifests/os"]
destination_page = "auto_puppetclasses"
 
server = XMLRPC::Client.new2(xmlrpc_url)
 
def doku_class(classfile, final_page)
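	# Scan one class file for @name / @desc / @info markers and append a dokuwiki table row to final_page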
	fp=File.open(classfile)
	fp.readlines().each { |line|
 
		line=line.gsub("\n","")
 
		if line =~ /@name/
			final_page += "| " + line[10,line.length]
		end
 
		if line =~ /@desc/
			final_page += " | " + line[10,line.length] + " | "
		end
 
		if line =~ /@info/
			final_page += line[10,line.length] + " |\n"
		end
	}
 
	return final_page
end
 
final_page=""
 
# output some dokuwiki thing
final_page += "===== Classes disponibles =====n"
final_page +=  "n"
final_page +=  "^ Nom de la classe ^ Description rapide ^ Infos supplémentaires ^n"
 
basedirs.each{ |basedir|
	Dir.new(basedir).entries.each { |entry|
		if File.file? basedir+"/"+entry then
			final_page=doku_class(basedir+"/"+entry,final_page)
		end
	}
}
 
server.call2("wiki.putPage", destination_page, final_page, "", 0)
Monday, June 22nd, 2009

I had talked about it with some elves a while ago, so here it is : my /etc/puppet. Don’t expect anything exceptional, since it does not follow the best practices, notably the split into modules. Incidentally, some classes need to be rewritten (often the first ones I designed).

Included are my external node script and Volcane’s ext_lookup script (from #puppet, credits inside). The files part is empty because it was too much of a pain to hunt down the things that must not be disclosed, but the structure is there.

Have fun.

Archive /etc/puppet

Category: BOFH Life, Puppet, SysAdmin