fix gitignore

Max Erenberg 2021-09-09 00:31:28 -04:00 committed by Mirror
parent decba6d28a
commit 6eea0c6584
80 changed files with 9051 additions and 1 deletion

config/ADDRESS (Normal file, 1 line)

@@ -0,0 +1 @@
129.97.134.71

config/ADDRESS_V6 (Normal file, 1 line)

@@ -0,0 +1 @@
2620:101:f000:4901:c5c::f:1055

debian/.bash_logout (vendored Normal file, 7 lines)

@@ -0,0 +1,7 @@
# ~/.bash_logout: executed by bash(1) when login shell exits.
# when leaving the console clear the screen to increase privacy
if [ "$SHLVL" = 1 ]; then
[ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
fi

debian/.bash_profile (vendored Normal file, 11 lines)

@@ -0,0 +1,11 @@
# ~/.bash_profile: executed by bash(1) for login shells.
# include .bashrc if it exists
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# set PATH so it includes user's private bin if it exists
if [ -d ~/bin ] ; then
PATH=~/bin:"${PATH}"
fi

debian/.bashrc (vendored Normal file, 46 lines)

@@ -0,0 +1,46 @@
# ~/.bashrc: executed by bash(1) for non-login shells.
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
export HISTCONTROL=ignoreboth
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(lesspipe)"
# A nice little prompt.
PS1='\[\033[01;33m\][`git branch 2>/dev/null|cut -f2 -d\* -s` ]\[\033[01;32m\]\u@\[\033[00;36m\]\h\[\033[01m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD/$HOME/~}\007"'
;;
*)
;;
esac
# Alias definitions.
# enable color support of ls and also add handy aliases
eval "`dircolors -b`"
alias ls='ls --color=auto'
alias ll='ls -l'
alias la='ls -A'
alias l='ls -CF'
alias cp='cp -i'
alias mv='mv -i'
alias ..='cd ..'
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi

debian/README (vendored Normal file, 257 lines)

@@ -0,0 +1,257 @@
Archvsync
=========
This is the central repository for the Debian mirror scripts. The scripts
in this repository are written for the purposes of maintaining a Debian
archive mirror (and shortly, a Debian bug mirror), but they should be
easily generalizable.
Currently the following scripts are available:
* ftpsync - Used to sync an archive using rsync
* runmirrors - Used to notify leaf nodes of available updates
* dircombine - Internal script to manage the mirror user's $HOME
on debian.org machines
* typicalsync - Generates a typical Debian mirror
* udh - We are lazy, just a shorthand to avoid typing the
commands, ignore... :)
Usage
=====
For impatient people, short usage instructions:
- Create a dedicated user for the whole mirror.
- Create a separate directory for the mirror, writeable by the new user.
- Place the ftpsync script in the mirror user's $HOME/bin (or just $HOME)
- Place the ftpsync.conf.sample into $HOME/etc as ftpsync.conf and edit
it to suit your system. You should at the very least change the TO=
and RSYNC_HOST lines.
- Create $HOME/log (or wherever you point $LOGDIR to)
- Set up .ssh/authorized_keys for the mirror user and place the public key of
your upstream mirror into it. Prefix the key with
no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command="~/bin/ftpsync",from="IPADDRESS"
and replace IPADDRESS with the address of your upstream mirror (see the
sketch after this list).
- You are finished
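A condensed sketch of the steps above (the user name, paths and key shown
here are examples only; adapt them to your setup):

  # as root: dedicated user and a directory for the mirror
  adduser --disabled-password archvsync
  mkdir -p /srv/mirror/debian
  chown archvsync: /srv/mirror/debian

  # as the mirror user: script, config and log directory
  mkdir -p ~/bin ~/etc ~/log
  install -m 0755 ftpsync ~/bin/ftpsync
  cp ftpsync.conf.sample ~/etc/ftpsync.conf   # then edit at least TO= and RSYNC_HOST=

  # one line in ~/.ssh/authorized_keys for the upstream push:
  # no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command="~/bin/ftpsync",from="IPADDRESS" ssh-rsa AAAA... upstream-push-key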
In order to receive different pushes or syncs from different archives,
name the config file ftpsync-$ARCHIVE.conf and call the ftpsync script
with the commandline "sync:archive:$ARCHIVE". Replace $ARCHIVE with a
sensible value. If your upstream mirror pushes you using the runmirrors
script bundled with this sync script, you do not need to add the
"sync:archive" parameter to the commandline; the scripts deal with it
automatically.
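For example, to additionally receive a hypothetical "bpo" archive from the
same upstream (host and user name are illustrative), the downstream keeps a
second config next to the normal one and the upstream triggers it with the
archive option:

  # downstream: ~/etc/ftpsync-bpo.conf (alongside ~/etc/ftpsync.conf)
  # upstream trigger:
  ssh archvsync@mirror.example.org sync:all sync:archive:bpo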
Debian mirror script minimum requirements
=========================================
As always, you may use whatever scripts you want for your Debian mirror,
but we *STRONGLY* recommend that you not invent your own. However, if you
want to be listed as a mirror, your script *MUST* support the following
minimal functionality:
- Must perform a 2-stage sync
The archive mirroring must be done in 2 stages. The first rsync run
must ignore the index files. The correct exclude options for the
first rsync run are:
--exclude Packages* --exclude Sources* --exclude Release* --exclude ls-lR*
The first stage must not delete any files.
The second stage should then transfer the above excluded files and
delete files that no longer belong on the mirror.
Rationale: If archive mirroring is done in a single stage, there will be
periods of time during which the index files will reference files not
yet mirrored.
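A minimal illustration of this two-stage layout (UPSTREAM and $TO are
placeholders; ftpsync itself adds many more options and safety checks):

  # stage 1: everything except the index files, no deletions
  rsync -prltvHS --exclude 'Packages*' --exclude 'Sources*' \
        --exclude 'Release*' --exclude 'ls-lR*' \
        rsync://UPSTREAM/debian/ "$TO"

  # stage 2: fetch the index files too, then delete obsolete files
  rsync -prltvHS --delete --delete-after \
        rsync://UPSTREAM/debian/ "$TO"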
- Must not ignore pushes while running.
If a push is received during a run of the mirror sync, it MUST NOT
be ignored. The whole synchronization process must be rerun.
Rationale: Most implementations of Debian mirror scripts will leave the
mirror in an inconsistent state in the event of a second push being
received while the first sync is still running. It is likely that in
the near future, the frequency of pushes will increase.
- Should understand multi-stage pushes.
The script should parse the arguments it gets via ssh, and if they
contain a hint to only sync stage1 or stage2, then ONLY those steps
SHOULD be performed.
Rationale: This enables us to coordinate the timing of the first
and second stage pushes and minimize the time during which the
archive is desynchronized. This is especially important for mirrors
that are involved in a round robin or GeoDNS setup.
The minimum arguments the script has to understand are:
sync:stage1   Only sync stage1
sync:stage2   Only sync stage2
sync:all      Do everything. Default if neither stage1 nor stage2 is present.
There are more possible arguments, for a complete list see the
ftpsync script in our git repository.
ftpsync
=======
This script is based on the old anonftpsync script. It has been rewritten
to add flexibility and fix a number of outstanding issues.
Some of the advantages of the new version are:
- Nearly every aspect is configurable
- Correct support for multiple pushes
- Support for multi-stage archive synchronisations
- Support for hook scripts at various points
- Support for multiple archives, even if they are pushed using one ssh key
- Support for multi-hop, multi-stage archive synchronisations
Correct support for multiple pushes
-----------------------------------
When the script receives a second push while it is running and syncing
the archive it won't ignore it. Instead it will rerun the
synchronisation step to ensure the archive is correctly synchronised.
Scripts that fail to do that risk ending up with an inconsistent archive.
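Internally ftpsync implements this with a marker file: every push touches
the marker, and the sync loop keeps running for as long as it exists
(simplified from the actual script):

  touch "${UPDATEREQUIRED}"            # done by every incoming push
  while [ -e "${UPDATEREQUIRED}" ]; do
      rm -f "${UPDATEREQUIRED}"
      rsync ...                        # if another push arrives meanwhile,
  done                                 # the marker reappears and we loop again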
Can do multi-stage archive synchronisations
-------------------------------------------
The script can be told to only perform the first or second stage of the
archive synchronisation.
This enables us to send all the binary packages and sources to a
number of mirrors, and then tell all of them to sync the
Packages/Release files at once. This will keep the timeframe in which
the mirrors are out of sync very small and will greatly help things like
DNS RR entries or even the planned GeoDNS setup.
Multi-hop, multi-stage archive synchronisations
-----------------------------------------------
The script can be told to perform a multi-hop multi-stage archive
synchronisation.
This is basically the same as the multi-stage synchronisation
explained above, but enables the downstream mirror to push its own
staged/multi-hop downstreams before returning. It has the same
advantage as the multi-stage synchronisation, but works across multiple
levels of mirrors. (Imagine one push going from Europe to Australia,
where three local mirrors are then updated before stage2 is sent out.
Instead of transferring the data from Europe to Australia four times,
all of them are updated nearly instantly.)
Can run hook scripts
--------------------
ftpsync currently allows 5 hook scripts to run at various points of the
mirror sync run.
Hook1: After lock is acquired, before first rsync
Hook2: After first rsync, if successful
Hook3: After second rsync, if successful
Hook4: Right before leaf mirror triggering
Hook5: After leaf mirror trigger (only if we have slave mirrors; HUB=true)
Note that Hook3 and Hook4 are likely to be called directly after each other.
The difference is that Hook3 is called *every* time the second rsync
succeeds even if the mirroring needs to re-run due to a second push.
Hook4 is only executed if mirroring is completed.
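A hook is simply an executable whose path is set in the config file; ftpsync
runs it at the corresponding point. A hypothetical Hook3 that records every
completed second-stage rsync could look like this (path and contents are
examples only):

  # in ~/etc/ftpsync.conf
  HOOK3="${HOME}/bin/hook3-log"

  # ~/bin/hook3-log (mode 0755)
  #!/bin/bash
  echo "$(date -u) stage2 rsync finished" >> "${HOME}/log/hooks.log"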
Support for multiple archives, even if they are pushed using one ssh key
------------------------------------------------------------------------
If you get multiple archives from your upstream mirror (say Debian,
Debian-Backports and Volatile), you previously had to use 3 different ssh
keys to synchronize them automatically. This script can do it all with
just one key, provided your upstream mirror tells you which archive it is
pushing. See "Commandline/SSH options" below for further details.
For details of all available options, please see the extensive documentation
in the sample configuration file.
Commandline/SSH options
=======================
Script options may be set either on the local command line, or passed by
specifying an ssh "command". Local commandline options always have
precedence over the SSH_ORIGINAL_COMMAND ones.
Currently this script understands the options listed below. To make them
take effect they MUST be prepended by "sync:".
Option       Behaviour
stage1       Only do stage1 sync
stage2       Only do stage2 sync
all          Do a complete sync (default)
mhop         Do a multi-hop sync
archive:foo  Sync archive foo (if the file $HOME/etc/ftpsync-foo.conf
             exists and is configured)
callback     Call back when done (needs proper ssh setup for this to
             work). It will always use the "command" callback:$HOSTNAME
             where $HOSTNAME is the one defined in config and
             will happen before slave mirrors are triggered.
So, to get the script to sync all of the archive behind bpo and call back when
it is complete, use an upstream trigger of
ssh $USER@$HOST sync:all sync:archive:bpo sync:callback
Mirror trace files
==================
Every mirror needs to have a 'trace' file under project/trace.
The file format is as follows:
The filename has to be the full hostname (eg. the output of hostname -f)
or, in the case of a mirror participating in RR DNS (where users will
never use the hostname), the name of the DNS RR entry (eg.
security.debian.org for the security rotation).
The content has (no leading spaces):
Sat Nov 8 13:20:22 UTC 2008
Used ftpsync version: 42
Running on host: steffani.debian.org
First line: Output of date -u
Second line: Freeform text containing the program name and version
Third line: Text "Running on host: " followed by hostname -f
The third line MUST NOT be the DNS RR name, even if the mirror is part
of it. It MUST BE the host's own name. This is in contrast to the filename,
which SHOULD be the DNS RR name.
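This matches what ftpsync itself writes after a successful stage2/all run,
roughly (${MIRRORNAME} defaults to hostname -f but may be set to the DNS RR
name):

  LC_ALL=POSIX LANG=POSIX date -u         >  "${TO}/project/trace/${MIRRORNAME}"
  echo "Used ftpsync version: ${VERSION}" >> "${TO}/project/trace/${MIRRORNAME}"
  echo "Running on host: $(hostname -f)"  >> "${TO}/project/trace/${MIRRORNAME}"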
runmirrors
==========
This script is used to tell leaf mirrors that it is time to synchronize
their copy of the archive. This is done by parsing a mirror list and
using ssh to "push" the leaf nodes. You can read much more about the
principle behind the push at [1]; essentially, it tells the receiving
end to run a pre-defined script. As the whole setup is extremely limited
and the ssh key is not usable for anything other than the pre-defined
script, this is the most secure method for such an action.
This script supports two types of pushes: The normal single stage push,
as well as the newer multi-stage push.
The normal push, as described above, will simply push the leaf node and
then go on with the other nodes.
The multi-staged push first pushes a mirror and tells it to only do a
stage1 sync run. Then it waits for the mirror (and all others being pushed
in the same run) to finish that run, before it tells all of the staged
mirrors to do the stage2 sync.
This way you can do a nearly-simultaneous update of multiple hosts.
This is useful in situations where periods of desynchronization should
be kept as small as possible. Examples of scenarios where this might be
useful include multiple hosts in a DNS Round Robin entry.
For details on the mirror list please see the documented
runmirrors.mirror.sample file.
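A mirror list entry consists of a push type, a name, a hostname, a user and
optional ssh options; the sketch below is inferred from the fields runmirrors
reads, so treat the sample file as authoritative (host names and key paths
are illustrative):

  # TYPE    NAME    HOSTNAME             USER       [SSH OPTIONS]
  all       leaf1   leaf1.example.org    archvsync
  staged    leaf2   leaf2.example.org    archvsync  -i /home/archvsync/.ssh/otherkey
  DELAY     300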
[1] http://blog.ganneff.de/blog/2007/12/29/ssh-triggers.html

debian/bin/dircombine (vendored Executable file, 62 lines)

@@ -0,0 +1,62 @@
#!/usr/bin/perl
# Uses symlinks to merge the files contained in a set of vcs
# checkouts into a single directory. Keeps track of when files are
# removed from the merged directories and removes the symlinks.
#
# Only merges files that match the specified pattern.
#
# Note that the directories given to merge should be paths that will work
# for symlink targets from the destination directory (so either full paths,
# or they should be right inside the destination directory).
#
# Note that other files in the destination directory will be left as-is.
#
# Copyright 2006 by Joey Hess, licensed under the GPL.
if (! @ARGV) {
die "usage: dircombine include-pattern dest dir1 [dir2 ...]\n";
}
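# Example invocation (this is how the bundled udh helper calls it):
#   dircombine . . archvsync/
# i.e. include-pattern ".", destination ".", and one checkout dir "archvsync/".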
my $pattern=shift;
my $dest=shift;
foreach my $dir (@ARGV) {
my %known;
# Link in each thing from the dir.
opendir(DIR, $dir) || die "opendir: $!";
while ($_=readdir(DIR)) {
next if $_ eq '.' || $_ eq '..' || $_ eq 'known' || $_ eq '.svn' || $_ eq '.git' || $_ eq '.gitignore' || $_ eq '_darcs';
next unless /$pattern/;
$known{$_}=1;
if (! -l "$dest/$_" && -e "$dest/$_") {
print STDERR "$_ in $dir is also in $dest\n";
}
elsif (! -l "$dest/$_") {
system("ln", "-svf", "$dir/$_", $dest);
}
}
closedir(DIR);
# Remove anything that was previously linked in but is not in the
# dir anymore.
if (-e "$dir/known") {
open(KNOWN, "$dir/known") || die "open $dir/known: $!";
while (<KNOWN>) {
chomp;
if (! $known{$_}) {
system("rm", "-vf", "$dest/$_");
}
}
close KNOWN;
}
# Save state for next time.
open(KNOWN, ">$dir/known") || die "write $dir/known: $!";
foreach my $file (sort keys %known) {
print KNOWN "$file\n";
}
close KNOWN;
}

debian/bin/ftpsync (vendored Executable file, 585 lines)

@@ -0,0 +1,585 @@
#! /bin/bash
# No, we can not deal with sh alone.
set -e
set -u
# ERR traps should be inherited from functions too. (And command
# substitutions and subshells and whatnot, but for us the function is
# the important part here)
set -E
# ftpsync script for Debian
# Based loosely on a number of existing scripts, written by an
# unknown number of different people over the years.
#
# Copyright (C) 2008,2009,2010,2011 Joerg Jaspert <joerg@debian.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; version 2.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
# In case the admin somehow wants to have this script located someplace else,
# he can set BASEDIR, and we will take that. If it is unset we take ${HOME}
# How the admin sets this isn't our place to deal with. One could use a wrapper
# for that. Or pam_env. Or whatever fits in the local setup. :)
BASEDIR=${BASEDIR:-"${HOME}"}
# Script version. DO NOT CHANGE, *unless* you change the master copy maintained
# by Joerg Jaspert and the Debian mirroradm group.
# This is used to track which mirror is using which script version.
VERSION="80387"
# Source our common functions
. "${BASEDIR}/etc/common"
########################################################################
########################################################################
## functions ##
########################################################################
########################################################################
# We want to be able to get told what kind of sync we should do. This
# might be anything, from the archive to sync, the stage to do, etc. A
# list of currently understood and valid options is below. Multiple
# options are separated by spaces. Every option has to be prefixed with
# sync: or it will be ignored!
#
# Option Behaviour
# stage1 Only do stage1 sync
# stage2 Only do stage2 sync
# all Do a complete sync
# mhop Do a mhop sync, usually additionally to stage1
# archive:foo Sync archive foo (if config for foo is available)
# callback Call back when done (needs proper ssh setup for this to
# work). It will always use the "command" callback:$HOSTNAME
# where $HOSTNAME is the one defined below/in config and
# will happen before slave mirrors are triggered.
#
# So to get us to sync all of the archive behind bpo and call back when
# we are done, a trigger command of
# "ssh $USER@$HOST sync:all sync:archive:bpo sync:callback" will do the
# trick.
check_commandline() {
while [ $# -gt 0 ]; do
case "$1" in
sync:stage1)
SYNCSTAGE1="true"
SYNCALL="false"
;;
sync:stage2)
SYNCSTAGE2="true"
SYNCALL="false"
;;
sync:callback)
SYNCCALLBACK="true"
;;
sync:archive:*)
ARCHIVE=${1##sync:archive:}
# We do not like / or . in the remotely supplied archive name.
ARCHIVE=${ARCHIVE//\/}
ARCHIVE=${ARCHIVE//.}
;;
sync:all)
SYNCALL="true"
;;
sync:mhop)
SYNCMHOP="true"
;;
*)
echo "Unknown option ${1} ignored"
;;
esac
shift # Check next set of parameters.
done
}
# All the stuff we want to do when we exit, no matter where
cleanup() {
trap - ERR TERM HUP INT QUIT EXIT
# all done. Mail the log, exit.
log "Mirrorsync done";
# Lets get a statistical value
SPEED="unknown"
if [ -f "${LOGDIR}/rsync-${NAME}.log" ]; then
SPEED=$(
SPEEDLINE=$(egrep '[0-9.]+ bytes/sec' "${LOGDIR}/rsync-${NAME}.log")
set "nothing" ${SPEEDLINE}
echo ${8:-""}
)
if [ -n "${SPEED}" ]; then
SPEED=${SPEED%%.*}
SPEED=$(( $SPEED / 1024 ))
fi
fi
log "Rsync transfer speed: ${SPEED} KB/s"
if [ -n "${MAILTO}" ]; then
# In case rsync had something on stderr
if [ -s "${LOGDIR}/rsync-${NAME}.error" ]; then
mail -e -s "[${PROGRAM}@$(hostname -s)] ($$) rsync ERROR on $(date +"%Y.%m.%d-%H:%M:%S")" ${MAILTO} < "${LOGDIR}/rsync-${NAME}.error"
fi
if [ "x${ERRORSONLY}x" = "xfalsex" ]; then
# And the normal log
MAILFILES="${LOG}"
if [ "x${FULLLOGS}x" = "xtruex" ]; then
# Someone wants full logs including rsync
MAILFILES="${MAILFILES} ${LOGDIR}/rsync-${NAME}.log"
fi
cat ${MAILFILES} | mail -e -s "[${PROGRAM}@$(hostname -s)] archive sync finished on $(date +"%Y.%m.%d-%H:%M:%S")" ${MAILTO}
fi
fi
savelog "${LOGDIR}/rsync-${NAME}.log"
savelog "${LOGDIR}/rsync-${NAME}.error"
savelog "$LOG" > /dev/null
rm -f "${LOCK}"
}
# Check rsyncs return value
check_rsync() {
ret=$1
msg=$2
# 24 - vanished source files. Ignored, that should be the target of $UPDATEREQUIRED
# and us re-running. If it's not, uplink is broken anyways.
case "${ret}" in
0) return 0;;
24) return 0;;
23) return 2;;
30) return 2;;
*)
error "ERROR: ${msg}"
return 1
;;
esac
}
########################################################################
########################################################################
# As what are we called?
NAME="$(basename $0)"
# The original command line arguments need to be saved!
if [ $# -gt 0 ]; then
ORIGINAL_COMMAND=$*
else
ORIGINAL_COMMAND=""
fi
SSH_ORIGINAL_COMMAND=${SSH_ORIGINAL_COMMAND:-""}
# Now, check if we got told about stuff via ssh
if [ -n "${SSH_ORIGINAL_COMMAND}" ]; then
# We deliberately prepend "nothing" and shift it away right again, so that
# a remotely supplied string starting with a dash cannot be interpreted
# as options to the set builtin.
set "nothing" "${SSH_ORIGINAL_COMMAND}"
shift
# Yes, unquoted $* here, otherwise the function would only see it as one
# parameter, which doesn't help the case statement inside it.
check_commandline $*
fi
# Now, we can locally override all the above variables by just putting
# them into the .ssh/authorized_keys file forced command.
if [ -n "${ORIGINAL_COMMAND}" ]; then
set ${ORIGINAL_COMMAND}
check_commandline $*
fi
# If we have been told to do stuff for a different archive than default,
# set the name accordingly.
ARCHIVE=${ARCHIVE:-""}
if [ -n "${ARCHIVE}" ]; then
NAME="${NAME}-${ARCHIVE}"
fi
# Now source the config for the archive we run on.
# (Yes, people can also overwrite the options above in the config file
# if they want to)
if [ -f "${BASEDIR}/etc/${NAME}.conf" ]; then
. "${BASEDIR}/etc/${NAME}.conf"
else
echo "Nono, you can't tell us about random archives. Bad boy!"
exit 1
fi
########################################################################
# Config options go here. Feel free to overwrite them in the config #
# file if you need to. #
# On debian.org machines the defaults should be ok. #
# #
# The following extra variables can be defined in the config file: #
# #
# ARCH_EXCLUDE #
# can be used to exclude a complete architecture from #
# mirroring. Use as a space separated list. #
# Possible values are: #
# alpha, amd64, arm, armel, hppa, hurd-i386, i386, ia64, #
# mipsel, mips, powerpc, s390, sparc, kfreebsd-i386, kfreebsd-amd64 #
# and source. #
# eg. ARCH_EXCLUDE="alpha arm armel mipsel mips s390 sparc" #
# #
# An unset value will mirror all architectures #
########################################################################
########################################################################
# There should be nothing to edit here, use the config file #
########################################################################
MIRRORNAME=${MIRRORNAME:-$(hostname -f)}
# Where to put logfiles in
LOGDIR=${LOGDIR:-"${BASEDIR}/log"}
# Our own logfile
LOG=${LOG:-"${LOGDIR}/${NAME}.log"}
# Where should we put all the mirrored files?
TO=${TO:-"/org/ftp.debian.org/ftp/"}
# used by log() and error()
PROGRAM=${PROGRAM:-"${NAME}-$(hostname -s)"}
# Where to send mails about mirroring to?
if [ "x$(hostname -d)x" != "xdebian.orgx" ]; then
# We are not on a debian.org host
MAILTO=${MAILTO:-"root"}
else
# Yay, on a .debian.org host
MAILTO=${MAILTO:-"mirrorlogs@debian.org"}
fi
# Want errors only or every log?
ERRORSONLY=${ERRORSONLY:-"true"}
# Want full logs, ie. including the rsync one?
FULLLOGS=${FULLLOGS:-"false"}
# How many logfiles to keep
LOGROTATE=${LOGROTATE:-14}
# Our lockfile
LOCK=${LOCK:-"${TO}/Archive-Update-in-Progress-${MIRRORNAME}"}
# timeout for the lockfile, in case we have bash older than v4 (and no /proc)
LOCKTIMEOUT=${LOCKTIMEOUT:-3600}
# Do we need another rsync run?
UPDATEREQUIRED="${TO}/Archive-Update-Required-${MIRRORNAME}"
# Trace file for mirror stats and checks (make sure we get full hostname)
TRACE=${TRACE:-"project/trace/${MIRRORNAME}"}
# rsync program
RSYNC=${RSYNC:-rsync}
# Rsync filter rules. Used to protect various files we always want to keep, even if we otherwise delete
# excluded files
RSYNC_FILTER=${RSYNC_FILTER:-"--filter=protect_Archive-Update-in-Progress-${MIRRORNAME} --filter=protect_${TRACE} --filter=protect_Archive-Update-Required-${MIRRORNAME}"}
# limit I/O bandwidth. Value is KBytes per second, unset or 0 is unlimited
RSYNC_BW=${RSYNC_BW:-0}
# Default rsync options for *every* rsync call
RSYNC_OPTIONS=${RSYNC_OPTIONS:-"-prltvHSB8192 --timeout 3600 --stats ${RSYNC_FILTER}"}
# Options we only use in the first pass, where we do not want packages/sources to fly in yet and don't want to delete files
RSYNC_OPTIONS1=${RSYNC_OPTIONS1:-"--exclude Packages* --exclude Sources* --exclude Release* --exclude InRelease --exclude ls-lR*"}
# Options for the second pass, where we do want everything, including deletion of old and now unused files
RSYNC_OPTIONS2=${RSYNC_OPTIONS2:-"--max-delete=40000 --delay-updates --delete --delete-after --delete-excluded"}
# Which rsync share to use on our upstream mirror?
RSYNC_PATH=${RSYNC_PATH:-"ftp"}
# Now add the bwlimit option. As default is 0 we always add it, rsync interprets
# 0 as unlimited, so this is safe.
RSYNC_OPTIONS="--bwlimit=${RSYNC_BW} ${RSYNC_OPTIONS}"
# Connect from mirror.csclub
RSYNC_OPTIONS="--address=129.97.134.71 ${RSYNC_OPTIONS}"
# We have no default host to sync from, but will error out if its unset
RSYNC_HOST=${RSYNC_HOST:-""}
# Error out if we have no host to sync from
if [ -z "${RSYNC_HOST}" ]; then
error "Missing a host to mirror from, please set RSYNC_HOST variable in ${BASEDIR}/etc/${NAME}.conf"
fi
# our username for the rsync share
RSYNC_USER=${RSYNC_USER:-""}
# the password
RSYNC_PASSWORD=${RSYNC_PASSWORD:-""}
# a possible proxy
RSYNC_PROXY=${RSYNC_PROXY:-""}
# Do we sync stage1?
SYNCSTAGE1=${SYNCSTAGE1:-"false"}
# Do we sync stage2?
SYNCSTAGE2=${SYNCSTAGE2:-"false"}
# Do we sync all?
SYNCALL=${SYNCALL:-"true"}
# Do we have a mhop sync?
SYNCMHOP=${SYNCMHOP:-"false"}
# Do we callback?
SYNCCALLBACK=${SYNCCALLBACK:-"false"}
# If we call back we need some more options defined in the config file.
CALLBACKUSER=${CALLBACKUSER:-"archvsync"}
CALLBACKHOST=${CALLBACKHOST:-"none"}
CALLBACKKEY=${CALLBACKKEY:-"none"}
# General excludes. Don't list architecture specific stuff here, use ARCH_EXCLUDE for that!
EXCLUDE=${EXCLUDE:-""}
# The temp directory used by rsync --delay-updates is not
# world-readable remotely. Always exclude it to avoid errors.
EXCLUDE="${EXCLUDE} --exclude .~tmp~/"
SOURCE_EXCLUDE=${SOURCE_EXCLUDE:-""}
ARCH_EXCLUDE=${ARCH_EXCLUDE:-""}
# Exclude architectures defined in $ARCH_EXCLUDE
for ARCH in ${ARCH_EXCLUDE}; do
EXCLUDE="${EXCLUDE} --exclude binary-${ARCH}/ --exclude installer-${ARCH}/ --exclude Contents-${ARCH}.gz --exclude Contents-${ARCH}.bz2 --exclude Contents-${ARCH}.diff/ --exclude arch-${ARCH}.files --exclude arch-${ARCH}.list.gz --exclude *_${ARCH}.deb --exclude *_${ARCH}.udeb --exclude *_${ARCH}.changes"
if [ "${ARCH}" = "source" ]; then
if [ -z ${SOURCE_EXCLUDE} ]; then
SOURCE_EXCLUDE=" --exclude source/ --exclude *.tar.gz --exclude *.diff.gz --exclude *.tar.bz2 --exclude *.diff.bz2 --exclude *.dsc "
fi
fi
done
# Hooks
HOOK1=${HOOK1:-""}
HOOK2=${HOOK2:-""}
HOOK3=${HOOK3:-""}
HOOK4=${HOOK4:-""}
HOOK5=${HOOK5:-""}
# Are we a hub?
HUB=${HUB:-"false"}
########################################################################
# Really nothing to see below here. Only code follows. #
########################################################################
########################################################################
# Some sane defaults
cd "${BASEDIR}"
umask 022
# If we are here for the first time, create the
# destination and the trace directory
mkdir -p "${TO}/project/trace"
# Used to make sure we will have the archive fully and completely synced before
# we stop, even if we get multiple pushes while this script is running.
# Otherwise we can end up with a half-synced archive:
# - get a push
# - sync, while locked
# - get another push. Of course no extra sync run then happens, we are locked.
# - done. Archive not correctly synced, we don't have all the changes from the second push.
touch "${UPDATEREQUIRED}"
# Check to see if another sync is in progress
if ! ( set -o noclobber; echo "$$" > "${LOCK}") 2> /dev/null; then
if [ ${BASH_VERSINFO[0]} -gt 3 ] || [ -L /proc/self ]; then
# We have a recent enough bash version, lets do it the easy way,
# the lock will contain the right pid, thanks to $BASHPID
if ! $(kill -0 $(cat ${LOCK}) 2>/dev/null); then
# Process does either not exist or is not owned by us.
echo "$$" > "${LOCK}"
else
echo "Unable to start rsync, lock file still exists, PID $(cat ${LOCK})"
exit 1
fi
else
# Old bash means we don't have the right pid in our lockfile.
# So take a different route - guess whether it is still running by comparing
# the lock file's age. Not optimal, but hey.
stamptime=$(date --reference="${LOCK}" +%s)
unixtime=$(date +%s)
difference=$(( $unixtime - $stamptime ))
if [ ${difference} -ge ${LOCKTIMEOUT} ]; then
# Took longer than LOCKTIMEOUT seconds? Assume it broke and take the lock
echo "$$" > "${LOCK}"
else
echo "Unable to start rsync, lock file younger than ${LOCKTIMEOUT} seconds"
exit 1
fi
fi
fi
# When we exit normally we call cleanup on our own. Otherwise we want it called by
# this trap. (We can not trap on EXIT, because that is called when the main script
# exits. Which also happens when we background the mainroutine, ie. while we still
# run!)
trap cleanup ERR TERM HUP INT QUIT
# Start log by redirecting stdout and stderr there and closing stdin
exec >"$LOG" 2>&1 <&-
log "Mirrorsync start"
# Look who pushed us and note that in the log.
PUSHFROM="${SSH_CONNECTION%%\ *}"
if [ -n "${PUSHFROM}" ]; then
log "We got pushed from ${PUSHFROM}"
fi
if [ "xtruex" = "x${SYNCCALLBACK}x" ]; then
if [ "xnonex" = "x${CALLBACKHOST}x" ] || [ "xnonex" = "x${CALLBACKKEY}x" ]; then
SYNCCALLBACK="false"
error "We are asked to call back, but we do not know where to and do not have a key, ignoring callback"
fi
fi
HOOK=(
HOOKNR=1
HOOKSCR=${HOOK1}
)
hook $HOOK
# Now, we might want to sync from anonymous too.
# This is that deep in this script so hook1 could, if wanted, change things!
if [ -z ${RSYNC_USER} ]; then
RSYNCPTH="${RSYNC_HOST}"
else
RSYNCPTH="${RSYNC_USER}@${RSYNC_HOST}"
fi
# Now do the actual mirroring, and run as long as we have an updaterequired file.
export RSYNC_PASSWORD
export RSYNC_PROXY
while [ -e "${UPDATEREQUIRED}" ]; do
log "Running mirrorsync, update is required, ${UPDATEREQUIRED} exists"
# if we want stage1 *or* all
if [ "xtruex" = "x${SYNCSTAGE1}x" ] || [ "xtruex" = "x${SYNCALL}x" ]; then
while [ -e "${UPDATEREQUIRED}" ]; do
rm -f "${UPDATEREQUIRED}"
log "Running stage1: ${RSYNC} ${RSYNC_OPTIONS} ${RSYNC_OPTIONS1} ${EXCLUDE} ${SOURCE_EXCLUDE} ${RSYNCPTH}::${RSYNC_PATH} ${TO}"
set +e
# Step one, sync everything except Packages/Releases
${RSYNC} ${RSYNC_OPTIONS} ${RSYNC_OPTIONS1} ${EXCLUDE} ${SOURCE_EXCLUDE} \
${RSYNCPTH}::${RSYNC_PATH} "${TO}" >"${LOGDIR}/rsync-${NAME}.log" 2>"${LOGDIR}/rsync-${NAME}.error"
result=$?
set -e
log "Back from rsync with returncode ${result}"
done
else
# Fake a good resultcode
result=0
fi # Sync stage 1?
rm -f "${UPDATEREQUIRED}"
set +e
check_rsync $result "Sync step 1 went wrong, got errorcode ${result}. Logfile: ${LOG}"
GO=$?
set -e
if [ ${GO} -eq 2 ] && [ -e "${UPDATEREQUIRED}" ]; then
log "We got error ${result} from rsync, but a second push went in hence ignoring this error for now"
elif [ ${GO} -ne 0 ]; then
exit 3
fi
HOOK=(
HOOKNR=2
HOOKSCR=${HOOK2}
)
hook $HOOK
# if we want stage2 *or* all
if [ "xtruex" = "x${SYNCSTAGE2}x" ] || [ "xtruex" = "x${SYNCALL}x" ]; then
log "Running stage2: ${RSYNC} ${RSYNC_OPTIONS} ${RSYNC_OPTIONS2} ${EXCLUDE} ${SOURCE_EXCLUDE} ${RSYNCPTH}::${RSYNC_PATH} ${TO}"
set +e
# We are lucky, it worked. Now do step 2 and sync again, this time including
# the packages/releases files
${RSYNC} ${RSYNC_OPTIONS} ${RSYNC_OPTIONS2} ${EXCLUDE} ${SOURCE_EXCLUDE} \
${RSYNCPTH}::${RSYNC_PATH} "${TO}" >>"${LOGDIR}/rsync-${NAME}.log" 2>>"${LOGDIR}/rsync-${NAME}.error"
result=$?
set -e
log "Back from rsync with returncode ${result}"
else
# Fake a good resultcode
result=0
fi # Sync stage 2?
set +e
check_rsync $result "Sync step 2 went wrong, got errorcode ${result}. Logfile: ${LOG}"
GO=$?
set -e
if [ ${GO} -eq 2 ] && [ -e "${UPDATEREQUIRED}" ]; then
log "We got error ${result} from rsync, but a second push went in hence ignoring this error for now"
elif [ ${GO} -ne 0 ]; then
exit 4
fi
HOOK=(
HOOKNR=3
HOOKSCR=${HOOK3}
)
hook $HOOK
done
# We only update our tracefile when we had a stage2 or an all sync.
# Otherwise we would update it after stage1 already, which is wrong.
if [ "xtruex" = "x${SYNCSTAGE2}x" ] || [ "xtruex" = "x${SYNCALL}x" ]; then
if [ -d "$(dirname "${TO}/${TRACE}")" ]; then
LC_ALL=POSIX LANG=POSIX date -u > "${TO}/${TRACE}"
echo "Used ftpsync version: ${VERSION}" >> "${TO}/${TRACE}"
echo "Running on host: $(hostname -f)" >> "${TO}/${TRACE}"
fi
fi
HOOK=(
HOOKNR=4
HOOKSCR=${HOOK4}
)
hook $HOOK
if [ "xtruex" = "x${SYNCCALLBACK}x" ]; then
set +e
callback ${CALLBACKUSER} ${CALLBACKHOST} "${CALLBACKKEY}"
set -e
fi
# Remove the Archive-Update-in-Progress file before we push our downstreams.
rm -f "${LOCK}"
if [ x${HUB} = "xtrue" ]; then
# Trigger slave mirrors if we had a push for stage2 or all, or if its mhop
if [ "xtruex" = "x${SYNCSTAGE2}x" ] || [ "xtruex" = "x${SYNCALL}x" ] || [ "xtruex" = "x${SYNCMHOP}x" ]; then
RUNMIRRORARGS=""
if [ -n "${ARCHIVE}" ]; then
# We tell runmirrors about the archive we are running on.
RUNMIRRORARGS="-a ${ARCHIVE}"
fi
# We also tell runmirrors that we are running it from within ftpsync, so it can change
# the way it works with mhop based on that.
RUNMIRRORARGS="${RUNMIRRORARGS} -f"
if [ "xtruex" = "x${SYNCSTAGE1}x" ]; then
# This is true when we have a mhop sync. A normal multi-stage push sending stage1 will
# not get to this point.
# So if that happens, tell runmirrors we are doing mhop
RUNMIRRORARGS="${RUNMIRRORARGS} -k mhop"
elif [ "xtruex" = "x${SYNCSTAGE2}x" ]; then
RUNMIRRORARGS="${RUNMIRRORARGS} -k stage2"
elif [ "xtruex" = "x${SYNCALL}x" ]; then
RUNMIRRORARGS="${RUNMIRRORARGS} -k all"
fi
log "Trigger slave mirrors using ${RUNMIRRORARGS}"
${BASEDIR}/bin/runmirrors ${RUNMIRRORARGS}
log "Trigger slave done"
HOOK=(
HOOKNR=5
HOOKSCR=${HOOK5}
)
hook $HOOK
fi
fi
# All done, lets call cleanup
cleanup

debian/bin/pushpdo (vendored Executable file, 112 lines)

@@ -0,0 +1,112 @@
#! /bin/bash
set -e
set -u
# pushpdo script for Debian
#
# Copyright (C) 2008 Joerg Jaspert <joerg@debian.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; version 2.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
# In case the admin somehow wants to have this script located someplace else,
# he can set BASEDIR, and we will take that. If it is unset we take ${HOME}
BASEDIR=${BASEDIR:-"${HOME}"}
NAME="`basename $0`"
# Read our config file
. "${BASEDIR}/etc/${NAME}.conf"
# Source our common functions
. "${BASEDIR}/etc/common"
# Set sane defaults if the configfile didn't do that for us.
# The directory for our logfiles
LOGDIR=${LOGDIR:-"${BASEDIR}/log"}
# Our own logfile
LOG=${LOG:-"${LOGDIR}/${NAME}.log"}
# How many logfiles to keep
LOGROTATE=${LOGROTATE:-14}
# Our mirrorfile
MIRRORS=${MIRRORS:-"${BASEDIR}/etc/${NAME}.mirror"}
# used by log()
PROGRAM=${PROGRAM:-"${NAME}-$(hostname -s)"}
# extra ssh options we might want hostwide
SSH_OPTS=${SSH_OPTS:-""}
# Which ssh key to use?
KEYFILE=${KEYFILE:-".ssh/pushpackages"}
# which path to "mirror"
MIRRORPATH=${MIRRORPATH:-"/org/packages.debian.org/mirror/"}
# where to send mails to
if [ "x$(hostname -s)x" != "x${MIRRORNAME%%.debian.org}x" ]; then
# We are not on a debian.org host
MAILTO=${MAILTO:-"root"}
else
# Yay, on a .debian.org host
MAILTO=${MAILTO:-"mirrorlogs@debian.org"}
fi
if ! [ -f "${BASEDIR}/${KEYFILE}" ]; then
error "SSH Key ${BASEDIR}/${KEYFILE} does not exist" >> ${LOG}
exit 5
fi
# Some sane defaults
cd ${BASEDIR}
umask 022
# Make sure we have our log and lock directories
mkdir -p "${LOGDIR}"
trap 'log "Pdopush done" >> ${LOG}; savelog "${LOG}" > /dev/null' EXIT
log "Pushing pdo mirrors" >> ${LOG}
# From here on we do *NOT* want to exit on errors. We don't want to
# stop pushing mirrors just because we can't reach one of them.
set +e
# Now read our mirrorfile and push the mirrors defined in there.
# We use grep to easily sort out all lines that start with a # or are empty.
egrep -v '^[[:space:]]*(#|$)' "${MIRRORS}" |
while read MLNAME MHOSTNAME MUSER MPROTO MKEYFILE; do
# Process the two options that can be left blank in the config
if [ -z ${MPROTO} ]; then
MPROTO=2
fi
if [ -z ${MKEYFILE} ]; then
MKEYFILE="${BASEDIR}/${KEYFILE}"
fi
# Now, people can do stupid things and leave out the protocol, but
# define a keyfile...
if [ ${MPROTO} -ne 1 ] && [ ${MPROTO} -ne 2 ]; then
error "Need a correct ssh protocol version for ${MLNAME}, skipping" >> ${LOG}
continue
fi
# And finally, push the mirror
log "Pushing ${MLNAME}" >> ${LOG}
# This needs a limited ssh key on the other side, something like
# no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command="rsync --server -vlogDtpr . /srv/mirrors/packages.debian.org/",from="87.106.64.223,2001:8d8:80:11::35d,powell.debian.org" ssh-rsa.....
rsync -e "ssh -i ${MKEYFILE} -${MPROTO} ${SSH_OPTS}" -av --stats "${MIRRORPATH}" ${MUSER}@${MHOSTNAME}:/does/not/matter >"${LOGDIR}/${MLNAME}.log"
log "Pushing ${MLNAME} done" >> ${LOG}
savelog "${LOGDIR}/${MLNAME}.log"
set +e
done
exit 0

debian/bin/runmirrors (vendored Executable file, 286 lines)

@@ -0,0 +1,286 @@
#! /bin/bash
set -e
set -u
# runmirrors script for Debian
# Based loosely on existing scripts, written by an unknown number of
# different people over the years.
#
# Copyright (C) 2008, 2009 Joerg Jaspert <joerg@debian.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; version 2.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
# In case the admin somehow wants to have this script located someplace else,
# he can set BASEDIR, and we will take that. If it is unset we take ${HOME}
BASEDIR=${BASEDIR:-"${HOME}"}
NAME="$(basename $0)"
HELP="$0, (C) 2008, 2009 by Joerg Jaspert <joerg@debian.org>\n
Usage:\n\n
1.) a single parameter with NO leading -.\n
\t This will then be used as the addition for our configfile. Ie. \`$0 security\` will\n
\t have us look for ${NAME}-security.{conf,mirror} files.\n\n
2.) using getopt style parameters:\n
\t -a [NAME] - Same as 1.) above, used for the config files. Default empty.\n
\t -k [TYPE] - Type of push. all, stage2, mhop. Default mhop.\n
\t -f - Run from within the mirrorscript ftpsync. Don't use from commandline!\n
\t -h - Print this help and exit
"
# If we got options, let's see if we use newstyle options or oldstyle. Oldstyle
# will not start with a -. If we find oldstyle we assume it's only one, the config
# name we run on.
if [ $# -gt 0 ]; then
if [ "x${1:0:1}x" != "x-x" ]; then
# Yes, does not start with a -, so use it for the config name.
CONF=${1:-""}
if [ -n "${CONF}" ]; then
NAME="${NAME}-${CONF}"
fi
else
# Yeah well, new style, starting with - for getopts
while getopts ':a:k:fh' OPTION ; do
case $OPTION in
a) CONF="${OPTARG}"
if [ -n "${CONF}" ]; then
NAME="${NAME}-${CONF}"
fi
;;
k) PUSHKIND="${OPTARG}"
;;
f) FROMFTPSYNC="true"
;;
h) echo -e $HELP
exit 0
;;
*) echo "Invalid usage"
echo -e $HELP
exit 1
;;
esac
done
fi
fi
# Make sure the values are always defined, even if there was no commandline option
# for them
# Default config is empty
CONF=${CONF:-""}
# Set the default if we didn't get told about it. Currently
# valid: all - normal push; mhop - multi-hop multi-stage push (this is stage1);
# stage2 - staged push, second phase. Default is mhop.
PUSHKIND=${PUSHKIND:-"mhop"}
# If we are pushed from within ftpsync. Default false.
FROMFTPSYNC=${FROMFTPSYNC:-"false"}
########################################################################
# Read our config file
. "${BASEDIR}/etc/${NAME}.conf"
# Source our common functions
. "${BASEDIR}/etc/common"
# Set sane defaults if the configfile didn't do that for us.
# The directory for our logfiles
LOGDIR=${LOGDIR:-"${BASEDIR}/log"}
# Our own logfile
LOG=${LOG:-"${LOGDIR}/${NAME}.log"}
# Our lockfile directory
LOCKDIR=${LOCKDIR:-"${BASEDIR}/locks"}
# How many logfiles to keep
LOGROTATE=${LOGROTATE:-14}
# Our mirrorfile
MIRRORS=${MIRRORS:-"${BASEDIR}/etc/${NAME}.mirror"}
# used by log()
PROGRAM=${PROGRAM:-"${NAME}-$(hostname -s)"}
# extra ssh options we might want hostwide
SSH_OPTS=${SSH_OPTS:-"-o StrictHostKeyChecking=no"}
# What's our archive name? We will also tell our leaf mirrors about it
PUSHARCHIVE=${PUSHARCHIVE:-"${CONF}"}
# How long to wait for mirrors to do stage1 if we have multi-stage syncing
PUSHDELAY=${PUSHDELAY:-600}
# Which ssh key to use?
KEYFILE=${KEYFILE:-".ssh/pushmirror"}
# where to send mails to
if [ "x$(hostname -d)x" != "xdebian.orgx" ]; then
# We are not on a debian.org host
MAILTO=${MAILTO:-"root"}
else
# Yay, on a .debian.org host
MAILTO=${MAILTO:-"mirrorlogs@debian.org"}
fi
if ! [ -f "${BASEDIR}/${KEYFILE}" ]; then
error "SSH Key ${BASEDIR}/${KEYFILE} does not exist" >> "${LOG}"
exit 5
fi
# Hooks
HOOK1=${HOOK1:-""}
HOOK2=${HOOK2:-""}
HOOK3=${HOOK3:-""}
########################################################################
# Some sane defaults
cd "${BASEDIR}"
umask 022
# Make sure we have our log and lock directories
mkdir -p "${LOGDIR}"
mkdir -p "${LOCKDIR}"
trap 'log "Mirrorpush done" >> "${LOG}"; savelog "${LOG}" > /dev/null' EXIT
log "Pushing leaf mirrors. Inside ftpsync: ${FROMFTPSYNC}. Pushkind: ${PUSHKIND}" >> "${LOG}"
HOOK=(
HOOKNR=1
HOOKSCR=${HOOK1}
)
hook $HOOK
# From here on we do *NOT* want to exit on errors. We don't want to
# stop pushing mirrors just because we can't reach one of them.
set +e
# Build up our list of 2-stage mirrors.
PUSHLOCKS=""
PUSHLOCKS=$(get2stage)
# In case we have it - remove. It is used to synchronize multi-stage mirroring
rm -f "${LOCKDIR}/all_stage1"
# Now read our mirrorfile and push the mirrors defined in there.
# We use grep to easily sort out all lines that start with a # or are empty.
egrep -v '^[[:space:]]*(#|$)' "${MIRRORS}" |
while read MTYPE MLNAME MHOSTNAME MUSER MSSHOPT; do
if [ "x${MTYPE}x" = "xDELAYx" ]; then
# We should wait a bit.
if [ -z ${MLNAME} ]; then
MLNAME=600
fi
log "Delay of ${MLNAME} requested, sleeping" >> "${LOG}"
sleep ${MLNAME}
continue
fi
# If we are told we have a mhop sync to do and are called from within ftpsync,
# we will only look at staged/mhop entries and ignore the rest.
if [ "x${PUSHKIND}x" = "xmhopx" ] && [ "x${FROMFTPSYNC}x" = "xtruex" ]; then
if [ "x${MTYPE}x" != "xstagedx" ] && [ "x${MTYPE}x" != "xmhopx" ]; then
continue
fi
fi
# Now, MSSHOPT may start with a -. In that case the whole rest of the line is taken
# as a set of options to give to ssh, we pass it without doing anything with it.
# If it starts with a 1 or 2 then it will tell us about the ssh protocol version to use,
# and also means we look if there is one value more after a space. That value would then
# be the ssh keyfile we use with -i. That gives us full flexibility for all
# ssh options but doesn't destroy backwards compatibility.
# If it is empty we assume proto 2 and the default keyfile.
#
# There is one bug in here. We will hand over the master keyfile, even if there is a
# "-i /bla/bla" in the options. ssh stuffs them together and presents both keys to the
# target server. In case both keys trigger some action, the first one presented wins,
# and this might not be what one wants.
#
# The only sane way around this, I think, is to drop backward compatibility,
# which I don't really like.
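# Illustrative MSSHOPT values (key paths are hypothetical):
#   -i /home/archvsync/.ssh/otherkey -o BatchMode=yes   -> new style, passed to ssh verbatim
#   2 /home/archvsync/.ssh/otherkey                     -> old style: protocol version, then keyfile
#   (empty)                                             -> protocol 2 with the default keyfile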
if [ -n "${MSSHOPT}" ]; then
# So its not empty, lets check if it starts with a - and as such is a "new-style"
# ssh options set.
if [ "x${MSSHOPT:0:1}x" = "x-x" ]; then
# Yes we start with a -
SSHOPT="${MSSHOPT}"
MPROTO="99"
MKEYFILE="${BASEDIR}/${KEYFILE}"
elif [ ${MSSHOPT:0:1} -eq 1 ] || [ ${MSSHOPT:0:1} -eq 2 ]; then
# We do seem to have oldstyle options here.
MPROTO=${MSSHOPT:0:1}
MKEYFILE=${MSSHOPT:1}
SSHOPT=""
else
error "I don't know what is configured for mirror ${MLNAME}"
continue
fi
else
MPROTO=2
MKEYFILE="${BASEDIR}/${KEYFILE}"
SSHOPT=""
fi
# Build our array
SIGNAL_OPTS=(
MIRROR="${MLNAME}"
HOSTNAME="${MHOSTNAME}"
USERNAME="${MUSER}"
SSHPROTO="${MPROTO}"
SSHKEY="${MKEYFILE}"
SSHOPTS="${SSHOPT/ /#}"
PUSHLOCKOWN="${LOCKDIR}/${MLNAME}.stage1"
PUSHTYPE="${MTYPE}"
PUSHARCHIVE=${PUSHARCHIVE}
PUSHKIND=${PUSHKIND}
FROMFTPSYNC=${FROMFTPSYNC}
)
# And finally, push the mirror
log "Trigger ${MLNAME}" >> "${LOG}"
signal "${SIGNAL_OPTS}" &
log "Trigger for ${MLNAME} done" >> "${LOG}"
HOOK=(
HOOKNR=2
HOOKSCR=${HOOK2}
)
hook $HOOK
set +e
done
# If we are run from within ftpsync *and* have an mhop push to send on, we have
# to wait until the push has gone through and they have all returned, or we will
# exit much too early.
# As the signal routine touches $LOCKDIR/all_stage1 when all are done, it's
# easy enough just to wait for that to appear. Of course we use PUSHDELAY
# to avoid waiting forever.
if [ "xtruex" = "x${FROMFTPSYNC}x" ] && [ "xmhopx" = "x${PUSHKIND}x" ]; then
tries=0
# We do not wait forever
while [ ${tries} -lt ${PUSHDELAY} ]; do
if [ -f "${LOCKDIR}/all_stage1" ]; then
break
fi
tries=$((tries + 5))
sleep 5
done
if [ ${tries} -ge ${PUSHDELAY} ]; then
error "Failed to wait for our mirrors when sending mhop push down." >> "${LOG}"
fi
fi
HOOK=(
HOOKNR=3
HOOKSCR=${HOOK3}
)
hook $HOOK
exit 0

debian/bin/typicalsync (vendored Executable file, 168 lines)

@@ -0,0 +1,168 @@
#!/usr/bin/perl -wT
# Copyright (c) 2006 Anthony Towns <ajt@debian.org>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
use strict;
use Fcntl ':flock';
use File::Find;
use POSIX qw(strftime);
# configuration:
my $local_dir = "/srv/ftp.debian.org/mirror";
my $rsync_host = undef; #"merkel.debian.org";
my $rsync_dir = undef; #"debian";
my $dest = "/srv/ftp.debian.org/rsync/typical";
my $max_del = 1000;
$ENV{"PATH"} = "/bin:/usr/bin";
# program
my $hostname = `/bin/hostname -f`;
die "bad hostname" unless $hostname =~ m/^([a-zA-Z0-9._-]+)/;
$hostname = $1;
my $lockfile = "./Archive-Update-in-Progress-$hostname";
unless (open LKFILE, "> $dest/$lockfile" and flock(LKFILE, LOCK_EX)) {
print "$hostname is unable to start sync, lock file exists\n";
exit(1);
}
if (defined $rsync_host && defined $rsync_dir) {
system("rsync --links --hard-links --times --verbose --recursive"
." --delay-updates --files-from :indices/files/typical.files"
." rsync://$rsync_host/$rsync_dir/ $dest/");
} else {
open FILELIST, "< $local_dir/indices/files/typical.files"
or die "typical.files index not found";
while (<FILELIST>) {
chomp;
m/^(.*)$/; $_ = $1;
my @l = lstat("$local_dir/$_");
next unless (@l);
if (-l _) {
my $lpath = readlink("$local_dir/$_");
$lpath =~ m/^(.*)$/; $lpath = $1;
if (-l "$dest/$_") {
next if ($lpath eq readlink("$dest/$_"));
}
unless (mk_dirname_as_dirs($dest, $_)) {
print "E: couldn't create path for $_\n";
next;
}
if (-d "$dest/$_") {
rename "$dest/$_", "$dest/$_.remove" or print "E: couldn't rename old dir $_ out of the way\n";
} elsif (-e "$dest/$_") {
unlink("$dest/$_") or print "E: couldn't unlink $_\n";
}
symlink($lpath, "$dest/$_") or print "E: couldn't create $_ as symlink to $lpath\n";
next;
}
next if (-d _);
unless (mk_dirname_as_dirs($dest, $_)) {
print "E: couldn't create path for $_\n";
next;
}
my @d = lstat("$dest/$_");
if (@d) {
if (-d _) {
rename("$dest/$_", "$dest/$_.remove") or print "E: couldn't rename old dir $_ out of the way\n";
} else {
next if (@l and @d and $l[0] == $d[0] and $l[1] == $d[1]);
#next if (@l and @d and $l[7] == $d[7]);
print "I: updating $_\n";
unlink("$dest/$_");
}
}
link("$local_dir/$_", "$dest/$_") or print "E: couldn't link $_\n";
}
close(FILELIST);
}
print "Files synced, now deleting any unnecessary files\n";
my %expected_files = ();
open FILES, "< $dest/indices/files/typical.files"
or die "typical.files index not found";
while (<FILES>) {
chomp;
$expected_files{$_} = 1;
}
close(FILES);
chdir($dest);
my $del_count = 0;
my $last = '';
finddepth({wanted => \&wanted, no_chdir => 1}, ".");
open TRACE, "> $dest/project/trace/$hostname" or die "couldn't open trace";
print TRACE strftime("%a %b %e %H:%M:%S UTC %Y", gmtime) . "\n";
close TRACE;
close LKFILE;
unlink("$dest/$lockfile");
exit(0);
sub wanted {
my ($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_);
if (-d _) {
if (substr($last, 0, length($_) + 1) ne "$_/") {
print "Deleting empty directory: $_\n";
$_ =~ m/^(.*)$/; # untaint (we run under -T)
my $f = $1;
rmdir($f);
} else {
$last = $_;
}
} elsif ($_ =~ m|^\./project/trace/| or $_ eq $lockfile) {
$last = $_;
} elsif (defined $expected_files{$_}) {
$last = $_;
} elsif ($del_count < $max_del) {
$del_count++;
print "Deleting file: $_\n";
$_ =~ m/^(.*)$/; # untaint
my $f = $1;
unlink($f);
}
}
sub mk_dirname_as_dirs {
my ($base, $file) = @_;
while ($file =~ m,^/*([^/]+)/+([^/].*)$,) {
$file = $2;
$base = "$base/$1";
my @blah = lstat($base);
if (!@blah) {
mkdir($base, 0777);
} elsif (-l _ or ! -d _) {
print "SHOULD BE A DIRECTORY: $base\n";
unlink($base);
mkdir($base, 0777);
}
}
1;
}

debian/bin/udh (vendored Executable file, 13 lines)

@@ -0,0 +1,13 @@
#!/bin/bash
set -e
unset LC_CTYPE
LANG=C
HOST=`hostname -f`
cd ${HOME}/archvsync
git pull
cd ${HOME}
~/archvsync/bin/dircombine . . archvsync/ >/dev/null 2>&1

debian/bin/websync (vendored Executable file, 304 lines)

@@ -0,0 +1,304 @@
#! /bin/bash
# No, we can not deal with sh alone.
set -e
set -u
# ERR traps should be inherited from functions too. (And command
# substitutions and subshells and whatnot, but for us the function is
# the important part here)
set -E
# websync script for Debian
# Based loosely on the old websync, written by an
# unknown number of different people over the years, and on ftpsync.
#
# Copyright (C) 2008,2009 Joerg Jaspert <joerg@debian.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; version 2.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
# In case the admin somehow wants to have this script located someplace else,
# he can set BASEDIR, and we will take that. If it is unset we take ${HOME}
# How the admin sets this isn't our place to deal with. One could use a wrapper
# for that. Or pam_env. Or whatever fits in the local setup. :)
BASEDIR=${BASEDIR:-"${HOME}"}
# Script version. DO NOT CHANGE, *unless* you change the master copy maintained
# by Joerg Jaspert and the Debian mirroradm group.
# This is used to track which mirror is using which script version.
VERSION="0815"
# Source our common functions
. "${BASEDIR}/etc/common"
########################################################################
########################################################################
## functions ##
########################################################################
########################################################################
# All the stuff we want to do when we exit, no matter where
cleanup() {
trap - ERR TERM HUP INT QUIT EXIT
# all done. Mail the log, exit.
log "Mirrorsync done";
if [ -n "${MAILTO}" ]; then
# In case rsync had something on stderr
if [ -s "${LOGDIR}/rsync-${NAME}.error" ]; then
mail -e -s "[${PROGRAM}@$(hostname -s)] ($$) rsync ERROR on $(date +"%Y.%m.%d-%H:%M:%S")" ${MAILTO} < "${LOGDIR}/rsync-${NAME}.error"
fi
if [ "x${ERRORSONLY}x" = "xfalsex" ]; then
# And the normal log
MAILFILES="${LOG}"
if [ "x${FULLLOGS}x" = "xtruex" ]; then
# Someone wants full logs including rsync
MAILFILES="${MAILFILES} ${LOGDIR}/rsync-${NAME}.log"
fi
cat ${MAILFILES} | mail -e -s "[${PROGRAM}@$(hostname -s)] web sync finished on $(date +"%Y.%m.%d-%H:%M:%S")" ${MAILTO}
fi
fi
savelog "${LOGDIR}/rsync-${NAME}.log"
savelog "${LOGDIR}/rsync-${NAME}.error"
savelog "$LOG" > /dev/null
rm -f "${LOCK}"
}
# Check rsyncs return value
check_rsync() {
ret=$1
msg=$2
# 24 - vanished source files. Ignored, that should be the target of $UPDATEREQUIRED
# and us re-running. If it's not, uplink is broken anyways.
case "${ret}" in
0) return 0;;
24) return 0;;
23) return 2;;
30) return 2;;
*)
error "ERROR: ${msg}"
return 1
;;
esac
}
########################################################################
########################################################################
# As what are we called?
NAME="`basename $0`"
# Now source the config.
. "${BASEDIR}/etc/${NAME}.conf"
########################################################################
# Config options go here. Feel free to overwrite them in the config #
# file if you need to. #
# On debian.org machines the defaults should be ok. #
########################################################################
########################################################################
# There should be nothing to edit here, use the config file #
########################################################################
MIRRORNAME=${MIRRORNAME:-`hostname -f`}
# Where to put logfiles in
LOGDIR=${LOGDIR:-"${BASEDIR}/log"}
# Our own logfile
LOG=${LOG:-"${LOGDIR}/${NAME}.log"}
# Where should we put all the mirrored files?
TO=${TO:-"/org/www.debian.org/www"}
# used by log() and error()
PROGRAM=${PROGRAM:-"${NAME}-$(hostname -s)"}
# Where to send mails about mirroring to?
if [ "x$(hostname -d)x" != "xdebian.orgx" ]; then
# We are not on a debian.org host
MAILTO=${MAILTO:-"root"}
else
# Yay, on a .debian.org host
MAILTO=${MAILTO:-"mirrorlogs@debian.org"}
fi
# Want errors only or every log?
ERRORSONLY=${ERRORSONLY:-"true"}
# Want full logs, ie. including the rsync one?
FULLLOGS=${FULLLOGS:-"false"}
# How many logfiles to keep
LOGROTATE=${LOGROTATE:-14}
# Our lockfile
LOCK=${LOCK:-"${TO}/Website-Update-in-Progress-${MIRRORNAME}"}
# Do we need another rsync run?
UPDATEREQUIRED="${TO}/Website-Update-Required-${MIRRORNAME}"
# Trace file for mirror stats and checks (make sure we get full hostname)
TRACE=${TRACE:-".project/trace/${MIRRORNAME}"}
# rsync program
RSYNC=${RSYNC:-rsync}
# Rsync filter rules. Used to protect various files we always want to keep, even if we otherwise delete
# excluded files
RSYNC_FILTER=${RSYNC_FILTER:-"--filter=protect_Website-Update-in-Progress-${MIRRORNAME} --filter=protect_${TRACE} --filter=protect_Website-Update-Required-${MIRRORNAME}"}
# Default rsync options for *every* rsync call
RSYNC_OPTIONS=${RSYNC_OPTIONS:-"-prltvHSB8192 --timeout 3600 --stats ${RSYNC_FILTER}"}
RSYNC_OPTIONS2=${RSYNC_OPTIONS2:-"--max-delete=40000 --delay-updates --delete --delete-after --delete-excluded"}
# Which rsync share to use on our upstream mirror?
RSYNC_PATH=${RSYNC_PATH:-"web.debian.org"}
# our username for the rsync share
RSYNC_USER=${RSYNC_USER:-""}
# the password
RSYNC_PASSWORD=${RSYNC_PASSWORD:-""}
# a possible proxy
RSYNC_PROXY=${RSYNC_PROXY:-""}
# General excludes.
EXCLUDE=${EXCLUDE:-"--exclude ${HOSTNAME}"}
# The temp directory used by rsync --delay-updates is not
# world-readable remotely. Always exclude it to avoid errors.
EXCLUDE="${EXCLUDE} --exclude .~tmp~/"
# And site-specific excludes; by default it's the sponsor stuff that should be local to all (except templates)
SITE_FILTER=${SITE_FILTER:-"--include sponsor.deb.* --exclude sponsor_img.* --exclude sponsor.html --exclude sponsor.*.html --filter=protect_sponsor_img.* --filter=protect_sponsor.html --filter=protect_sponsor.*.html"}
# Hooks
HOOK1=${HOOK1:-""}
HOOK2=${HOOK2:-""}
HOOK3=${HOOK3:-""}
HOOK4=${HOOK4:-""}
# Are we a hub?
HUB=${HUB:-"false"}
# Some sane defaults
cd "${BASEDIR}"
umask 022
# If we are here for the first time, create the
# destination and the trace directory
mkdir -p "${TO}/.project/trace"
# Used to make sure we will have the archive fully and completely synced before
# we stop, even if we get multiple pushes while this script is running.
# Otherwise we can end up with a half-synced archive:
# - get a push
# - sync, while locked
# - get another push. Of course no extra sync run then happens, we are locked.
# - done. Archive not correctly synced, we don't have all the changes from the second push.
touch "${UPDATEREQUIRED}"
# Check to see if another sync is in progress
if ! ( set -o noclobber; echo "$$" > "${LOCK}") 2> /dev/null; then
if ! $(kill -0 $(cat ${LOCK}) 2>/dev/null); then
# The process either does not exist or is not owned by us.
echo "$$" > "${LOCK}"
else
echo "Unable to start rsync, lock file still exists, PID $(cat ${LOCK})"
exit 1
fi
fi
trap cleanup EXIT ERR TERM HUP INT QUIT
# Start log by redirecting everything there.
exec >"$LOG" 2>&1 </dev/null
# Look who pushed us and note that in the log.
log "Mirrorsync start"
PUSHFROM="${SSH_CONNECTION%%\ *}"
if [ -n "${PUSHFROM}" ]; then
log "We got pushed from ${PUSHFROM}"
fi
log "Acquired main lock"
HOOK=(
HOOKNR=1
HOOKSCR=${HOOK1}
)
hook $HOOK
# Now, we might want to sync from anonymous too.
# This is placed this deep in the script so that hook1 could, if wanted, change things!
if [ -z "${RSYNC_USER}" ]; then
RSYNCPTH="${RSYNC_HOST}"
else
RSYNCPTH="${RSYNC_USER}@${RSYNC_HOST}"
fi
# Now do the actual mirroring, and run as long as we have an updaterequired file.
export RSYNC_PASSWORD
export RSYNC_PROXY
while [ -e "${UPDATEREQUIRED}" ]; do
log "Running mirrorsync, update is required, ${UPDATEREQUIRED} exists"
rm -f "${UPDATEREQUIRED}"
log "Syncing: ${RSYNC} ${RSYNC_OPTIONS} ${RSYNC_OPTIONS2} ${EXCLUDE} ${SITE_FILTER} ${RSYNCPTH}::${RSYNC_PATH} ${TO}"
set +e
${RSYNC} ${RSYNC_OPTIONS} ${RSYNC_OPTIONS2} ${EXCLUDE} ${SITE_FILTER} \
${RSYNCPTH}::${RSYNC_PATH} "${TO}" >"${LOGDIR}/rsync-${NAME}.log" 2>"${LOGDIR}/rsync-${NAME}.error"
result=$?
set -e
log "Back from rsync with returncode ${result}"
set +e
check_rsync $result "Sync went wrong, got errorcode ${result}. Logfile: ${LOG}"
GO=$?
set -e
if [ ${GO} -eq 2 ] && [ -e "${UPDATEREQUIRED}" ]; then
log "We got error ${result} from rsync, but a second push came in, so we ignore this error for now"
elif [ ${GO} -ne 0 ]; then
exit 3
fi
HOOK=(
HOOKNR=2
HOOKSCR=${HOOK2}
)
hook $HOOK
done
mkdir -p "${TO}/.project/trace"
LC_ALL=POSIX LANG=POSIX date -u > "${TO}/${TRACE}"
echo "Used websync version: ${VERSION}" >> "${TO}/${TRACE}"
echo "Running on host: $(hostname -f)" >> "${TO}/${TRACE}"
HOOK=(
HOOKNR=3
HOOKSCR=${HOOK3}
)
hook $HOOK
if [ x${HUB} = "xtrue" ]; then
log "Trigger slave mirrors"
${BASEDIR}/bin/runmirrors "websync"
log "Trigger slave done"
HOOK=(
HOOKNR=4
HOOKSCR=${HOOK4}
)
hook $HOOK
fi
# All done, rest is done by cleanup hook.

230
debian/etc/common vendored Normal file
View File

@ -0,0 +1,230 @@
# -*- mode:sh -*-
# Little common functions
# push a mirror attached to us.
# Arguments (using an array named SIGNAL_OPTS):
#
# $MIRROR - Name for the mirror, also basename for the logfile
# $HOSTNAME - Hostname to push to
# $USERNAME - Username there
# $SSHPROTO - Protocol version, either 1 or 2.
# $SSHKEY - the ssh private key file to use for this push
# $SSHOPTS - any other option ssh accepts, passed blindly, be careful
# $PUSHLOCKOWN - own lockfile name to touch after stage1 in pushtype=staged
# $PUSHTYPE - what kind of push should be done?
# all - normal, just push once with ssh backgrounded and finish
# staged - staged. first push stage1, then wait for $PUSHLOCKs to appear,
# then push stage2
# $PUSHARCHIVE - what archive to sync? (Multiple mirrors behind one ssh key!)
# $PUSHCB - do we want a callback?
# $PUSHKIND - what's going on? Are we doing an mhop push or already stage2?
# $FROMFTPSYNC - set to true if we run from within ftpsync.
#
# This function assumes that the variable LOG is set to a directory where
# logfiles can be written to.
# Additionally $PUSHLOCKS has to be defined as a set of space delimited strings
# (list of "lock"files) to wait for if you want pushtype=staged
#
# Pushes might be done in background (for type all).
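#
# Minimal invocation sketch (illustrative only; mirror name, host, user and
# key below are placeholders, not real configuration):
#
#   SIGNAL_OPTS=(
#       MIRROR="eu.example"
#       HOSTNAME="mirror.example.org"
#       USERNAME="archvsync"
#       SSHPROTO="2"
#       SSHKEY="${HOME}/.ssh/pushmirror"
#       PUSHTYPE="all"
#   )
#   signal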
signal () {
ARGS="SIGNAL_OPTS[*]"
local ${!ARGS}
MIRROR=${MIRROR:-""}
HOSTNAME=${HOSTNAME:-""}
USERNAME=${USERNAME:-""}
SSHPROTO=${SSHPROTO:-""}
SSHKEY=${SSHKEY:-""}
SSHOPTS=${SSHOPTS:-""}
PUSHLOCKOWN=${PUSHLOCKOWN:-""}
PUSHTYPE=${PUSHTYPE:-"all"}
PUSHARCHIVE=${PUSHARCHIVE:-""}
PUSHCB=${PUSHCB:-""}
PUSHKIND=${PUSHKIND:-"all"}
FROMFTPSYNC=${FROMFTPSYNC:-"false"}
# And now turn # back into space (callers encode spaces in SSHOPTS as #)...
SSHOPTS=${SSHOPTS/\#/ }
# Defaults we always want, no matter what
SSH_OPTIONS="-o user=${USERNAME} -o BatchMode=yes -o ServerAliveInterval=45 -o ConnectTimeout=45 -o PasswordAuthentication=no"
# If there are userdefined ssh options, add them.
if [ -n "${SSH_OPTS}" ]; then
SSH_OPTIONS="${SSH_OPTIONS} ${SSH_OPTS}"
fi
# Does this machine need a special key?
if [ -n "${SSHKEY}" ]; then
SSH_OPTIONS="${SSH_OPTIONS} -i ${SSHKEY}"
fi
# Does this machine have an extra own set of ssh options?
if [ -n "${SSHOPTS}" ]; then
SSH_OPTIONS="${SSH_OPTIONS} ${SSHOPTS}"
fi
# Set the protocol version
if [ -n "${SSHPROTO}" ] && [ ${SSHPROTO} -ne 1 ] && [ ${SSHPROTO} -ne 2 ] && [ ${SSHPROTO} -ne 99 ]; then
# Idiots, we only want 1 or 2. Can't decide? Let's force 2.
SSHPROTO=2
fi
if [ -n "${SSHPROTO}" ] && [ ${SSHPROTO} -ne 99 ]; then
SSH_OPTIONS="${SSH_OPTIONS} -${SSHPROTO}"
fi
date -u >> "${LOGDIR}/${MIRROR}.log"
PUSHARGS=""
# PUSHARCHIVE empty or not, we always add the sync:archive: command to transfer.
# Otherwise, if nothing else is added, ssh -f would not work ("no command to execute").
# ftpsync treats a bare "sync:archive:" as the main archive, so this works nicely.
PUSHARGS="${PUSHARGS} sync:archive:${PUSHARCHIVE}"
# We have a callback wish, tell downstreams
if [ -n "${PUSHCB}" ]; then
PUSHARGS="${PUSHARGS} sync:callback"
fi
# If we are running an mhop push AND our downstream is one to receive it, tell it.
if [ "xmhopx" = "x${PUSHKIND}x" ] && [ "xmhopx" = "x${PUSHTYPE}x" ]; then
PUSHARGS="${PUSHARGS} sync:mhop"
fi
if [ "xallx" = "x${PUSHTYPE}x" ]; then
# Default normal "fire and forget" push. We background that; we do not care about the mirror's doings
echo "Sending normal push" >> "${LOGDIR}/${MIRROR}.log"
PUSHARGS1="sync:all"
ssh -f $SSH_OPTIONS "${HOSTNAME}" "${PUSHARGS} ${PUSHARGS1}" >>"${LOGDIR}/${MIRROR}.log"
elif [ "xstagedx" = "x${PUSHTYPE}x" ] || [ "xmhopx" = "x${PUSHTYPE}x" ]; then
# Want a staged push. Fine, let's do that. Not backgrounded. We care about the mirror's doings.
echo "Sending staged push" >> "${LOGDIR}/${MIRROR}.log"
# Only send stage1 if we haven't already sent it. When called with stage2, we already did.
if [ "xstage2x" != "x${PUSHKIND}x" ]; then
# Step1: Do a push to only sync stage1, do not background
PUSHARGS1="sync:stage1"
ssh $SSH_OPTIONS "${HOSTNAME}" "${PUSHARGS} ${PUSHARGS1}" >>"${LOGDIR}/${MIRROR}.log" 2>&1
touch "${PUSHLOCKOWN}"
# Step2: Wait for all the other "lock"files to appear.
tries=0
# We do not wait forever
while [ ${tries} -lt ${PUSHDELAY} ]; do
total=0
found=0
for file in ${PUSHLOCKS}; do
total=$((total + 1))
if [ -f ${file} ]; then
found=$((found + 1))
fi
done
if [ ${total} -eq ${found} ] || [ -f "${LOCKDIR}/all_stage1" ]; then
touch "${LOCKDIR}/all_stage1"
break
fi
tries=$((tries + 5))
sleep 5
done
# In case we did not have all PUSHLOCKS and still continued, note it.
# This is a little racy, especially if the other parts decide to do this
# at the same time, but it won't hurt more than an extra mail, so I don't care much.
if [ ${tries} -ge ${PUSHDELAY} ]; then
echo "Failed to wait for all other mirrors. Failed ones are:" >> "${LOGDIR}/${MIRROR}.log"
for file in ${PUSHLOCKS}; do
if [ ! -f ${file} ]; then
echo "${file}" >> "${LOGDIR}/${MIRROR}.log"
error "Missing Pushlockfile ${file} after waiting ${tries} seconds, continuing"
fi
done
fi
rm -f "${PUSHLOCKOWN}"
fi
# Step3: It either timed out or we have all the "lock"files, do the rest
# If we are doing mhop AND are called from ftpsync - we now exit.
# That way we notify our uplink that we and all our clients are done with their
# stage1. It can then finish its own, and if all of our upstream's downlinks are done,
# it will send us stage2.
# If we are not doing mhop or are not called from ftpsync, we start stage2
if [ "xtruex" = "x${FROMFTPSYNC}x" ] && [ "xmhopx" = "x${PUSHKIND}x" ]; then
return
else
PUSHARGS2="sync:stage2"
echo "Now doing the second stage push" >> "${LOGDIR}/${MIRROR}.log"
ssh $SSH_OPTIONS "${HOSTNAME}" "${PUSHARGS} ${PUSHARGS2}" >>"${LOGDIR}/${MIRROR}.log" 2>&1
fi
else
# Can't decide? Then you get nothing.
return
fi
}
# callback, used by ftpsync
callback () {
# Defaults we always want, no matter what
SSH_OPTIONS="-o BatchMode=yes -o ServerAliveInterval=45 -o ConnectTimeout=45 -o PasswordAuthentication=no"
ssh $SSH_OPTIONS -i "$3" -o"user $1" "$2" callback:${HOSTNAME}
}
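# Illustrative call (argument order per the ssh line above: user, host,
# keyfile; the CALLBACK* names are the settings from ftpsync.conf):
#   callback "${CALLBACKUSER}" "${CALLBACKHOST}" "${CALLBACKKEY}"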
# log something (basically echo it together with a timestamp)
#
# Set $PROGRAM to a string to have it added to the output.
log () {
if [ -z "${PROGRAM}" ]; then
echo "$(date +"%b %d %H:%M:%S") $(hostname -s) [$$] $@"
else
echo "$(date +"%b %d %H:%M:%S") $(hostname -s) ${PROGRAM}[$$]: $@"
fi
}
# log the message using log() but then also send a mail
# to the address configured in MAILTO (if non-empty)
error () {
log "$@"
if [ -n "${MAILTO}" ]; then
echo "$@" | mail -e -s "[$PROGRAM@$(hostname -s)] ERROR [$$]" ${MAILTO}
fi
}
# run a hook
# needs array variable HOOK setup with HOOKNR being a number and HOOKSCR
# the script to run.
hook () {
ARGS='HOOK[@]'
local "${!ARGS}"
if [ -n "${HOOKSCR}" ]; then
log "Running hook $HOOKNR: ${HOOKSCR}"
set +e
${HOOKSCR}
result=$?
set -e
log "Back from hook $HOOKNR, got returncode ${result}"
return $result
else
return 0
fi
}
# Return the list of stage1 lockfiles for the 2-stage (staged/mhop) mirrors.
get2stage() {
egrep '^(staged|mhop)' "${MIRRORS}" | {
while read MTYPE MLNAME MHOSTNAME MUSER MPROTO MKEYFILE; do
PUSHLOCKS="${LOCKDIR}/${MLNAME}.stage1 ${PUSHLOCKS}"
done
echo "$PUSHLOCKS"
}
}
# Rotate logfiles
savelog() {
torotate="$1"
count=${2:-${LOGROTATE}}
while [ ${count} -gt 0 ]; do
prev=$(( count - 1 ))
if [ -e "${torotate}.${prev}" ]; then
mv "${torotate}.${prev}" "${torotate}.${count}"
fi
count=$prev
done
mv "${torotate}" "${torotate}.0"
}
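# Illustrative use (rotates the main logfile, keeping ${LOGROTATE} old copies):
#   savelog "${LOG}"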

137
debian/etc/ftpsync.conf vendored Normal file
View File

@ -0,0 +1,137 @@
## Mirrorname. This is used for things like the trace file and should always
## be the full hostname of the mirror.
MIRRORNAME=mirror.csclub.uwaterloo.ca
## Destination of the mirrored files. Should be an empty directory.
## CAREFUL, this directory will contain the mirror. Everything else
## that might have happened to be in there WILL BE GONE after the mirror sync!
TO="/mirror/root/debian"
## The upstream name of the rsync share.
RSYNC_PATH="debian"
## The host we mirror from
RSYNC_HOST="ftp.ca.debian.org"
## In case we need a user to access the rsync share at our upstream host
#RSYNC_USER=
## If we need a user we also need a password
#RSYNC_PASSWORD=
## In which directory should logfiles end up
## Note that BASEDIR defaults to $HOME, but can be set before calling the
## ftpsync script to any value you want (for example using pam_env)
#LOGDIR="${BASEDIR}/log"
## Name of our own logfile.
## Note that ${NAME} is set by the ftpsync script depending on the way it
## is called. See README for a description of the multi-archive capability
## and better always include ${NAME} in this path.
#LOG="${LOGDIR}/${NAME}.log"
## The script can send logs (or error messages) to a mail address.
## If this is unset it will default to the local root user unless it is run
## on a .debian.org machine where it will default to the mirroradm people.
MAILTO="mirror"
## If you do want a mail about every single sync, set this to false.
## Otherwise mails are only sent if a mirror sync fails.
#ERRORSONLY="true"
## If you want the logs to also include output of rsync, set this to true.
## Careful, the logs can get pretty big, especially if it is the first mirror
## run
#FULLLOGS="false"
## If you do want to exclude files from the mirror run, put --exclude statements here.
## See rsync(1) for the exact syntax, these are passed to rsync as written here.
## DO NOT TRY TO EXCLUDE ARCHITECTURES OR SUITES WITH THIS, IT WILL NOT WORK!
#EXCLUDE=""
## If you do want to exclude an architecture, this is for you.
## Use as space separated list.
## Possible values are:
## alpha, amd64, arm, armel, hppa, hurd-i386, i386, ia64, kfreebsd-amd64,
## kfreebsd-i386, m68k, mipsel, mips, powerpc, s390, sh and sparc
## eg. ARCH_EXCLUDE="alpha arm armel mipsel mips s390 sparc"
## An unset value will mirror all architectures (default!)
#ARCH_EXCLUDE=""
## Do we have leaf mirrors to signal that we are done and they should sync?
## If so set it to true and make sure you configure runmirrors.mirrors
## and runmirrors.conf for your needs.
#HUB=false
## We do create three logfiles for every run. To save space we rotate them; this
## defines how many we keep
#LOGROTATE=14
## Our own lockfile (only one sync should run at any time)
#LOCK="${TO}/Archive-Update-in-Progress-${MIRRORNAME}"
# Timeout for the lockfile, in case we have bash older than v4 (and no /proc)
# LOCKTIMEOUT=${LOCKTIMEOUT:-3600}
## The following file is used to make sure we will end up with a correctly
## synced mirror even if we get multiple pushes in a short timeframe
#UPDATEREQUIRED="${TO}/Archive-Update-Required-${MIRRORNAME}"
## The trace file is used by a mirror check tool to see when we last
## had a successful mirror sync. Make sure that it always ends up in
## project/trace and always shows the full hostname.
## This is *relative* to ${TO}
#TRACE="project/trace/${MIRRORNAME}"
## We sync our mirror using rsync (everything else would be insane), so
## we need a few options set.
## The rsync program
#RSYNC=rsync
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## Default rsync options every rsync invocation sees.
#RSYNC_OPTIONS="-rltvHSB8192 --timeout 3600 --stats --exclude Archive-Update-in-Progress-${MIRRORNAME} --exclude ${TRACE} --exclude Archive-Update-Required-${MIRRORNAME}"
## Options the first pass gets. We do not want the Packages/Source indices
## here, and we also do not want to delete any files yet.
#RSYNC_OPTIONS1="--exclude Packages* --exclude Sources* --exclude Release* --exclude ls-lR*"
## Options the second pass gets. Now we want the Packages/Source indices too
## and we also want to delete files. We also want to delete files that are
## excluded.
#RSYNC_OPTIONS2="--max-delete=40000 --delay-updates --delete --delete-after --delete-excluded"
## You may establish the connection via a web proxy by setting the environment
## variable RSYNC_PROXY to a hostname:port pair pointing to your web proxy. Note
## that your web proxy's configuration must support proxy connections to port 873.
# RSYNC_PROXY=
## The following three options are used in case we want to "callback" the host
## we got pushed from.
#CALLBACKUSER="archvsync"
#CALLBACKHOST="none"
#CALLBACKKEY="none"
## Hook scripts can be run at various places during the sync.
## Leave them blank if you don't want any
## Hook1: After lock is acquired, before first rsync
## Hook2: After first rsync, if successful
## Hook3: After second rsync, if successful
## Hook4: Right before leaf mirror triggering
## Hook5: After leaf mirror trigger, only if we have slave mirrors (HUB=true)
##
## Note that Hook3 and Hook4 are likely to be called directly after each other.
## Difference is: Hook3 is called *every* time the second rsync was successful,
## even if the mirroring needs to re-run thanks to a second push.
## Hook4 is only effective if we are done with mirroring.
#HOOK1=
#HOOK2=
#HOOK3=
#HOOK4=
#HOOK5=

148
debian/etc/ftpsync.conf.sample vendored Normal file
View File

@ -0,0 +1,148 @@
########################################################################
########################################################################
## This is a sample configuration file for the ftpsync mirror script. ##
## Most of the values are commented out and just shown here for ##
## completeness, together with their default value. ##
########################################################################
########################################################################
## Mirrorname. This is used for things like the trace file and should always
## be the full hostname of the mirror.
#MIRRORNAME=`hostname -f`
## Destination of the mirrored files. Should be an empty directory.
## CAREFUL, this directory will contain the mirror. Everything else
## that might have happened to be in there WILL BE GONE after the mirror sync!
#TO="/org/ftp.debian.org/ftp/"
## The upstream name of the rsync share.
#RSYNC_PATH="ftp"
## The host we mirror from
#RSYNC_HOST=some.mirror.debian.org
## In case we need a user to access the rsync share at our upstream host
#RSYNC_USER=
## If we need a user we also need a password
#RSYNC_PASSWORD=
## In which directory should logfiles end up
## Note that BASEDIR defaults to $HOME, but can be set before calling the
## ftpsync script to any value you want (for example using pam_env)
#LOGDIR="${BASEDIR}/log"
## Name of our own logfile.
## Note that ${NAME} is set by the ftpsync script depending on the way it
## is called. See README for a description of the multi-archive capability
## and better always include ${NAME} in this path.
#LOG="${LOGDIR}/${NAME}.log"
## The script can send logs (or error messages) to a mail address.
## If this is unset it will default to the local root user unless it is run
## on a .debian.org machine where it will default to the mirroradm people.
#MAILTO="root"
## If you do want a mail about every single sync, set this to false.
## Otherwise mails are only sent if a mirror sync fails.
#ERRORSONLY="true"
## If you want the logs to also include output of rsync, set this to true.
## Careful, the logs can get pretty big, especially if it is the first mirror
## run
#FULLLOGS="false"
## If you do want to exclude files from the mirror run, put --exclude statements here.
## See rsync(1) for the exact syntax, these are passed to rsync as written here.
## DO NOT TRY TO EXCLUDE ARCHITECTURES OR SUITES WITH THIS, IT WILL NOT WORK!
#EXCLUDE=""
## If you do want to exclude an architecture, this is for you.
## Use as space separated list.
## Possible values are:
## alpha, amd64, arm, armel, hppa, hurd-i386, i386, ia64, kfreebsd-amd64,
## kfreebsd-i386, m68k, mipsel, mips, powerpc, s390, sh, sparc and source
## eg. ARCH_EXCLUDE="alpha arm armel mipsel mips s390 sparc"
## An unset value will mirror all architectures (default!)
#ARCH_EXCLUDE=""
## Do we have leaf mirrors to signal that we are done and they should sync?
## If so set it to true and make sure you configure runmirrors.mirrors
## and runmirrors.conf for your needs.
#HUB=false
## We do create three logfiles for every run. To save space we rotate them; this
## defines how many we keep
#LOGROTATE=14
## Our own lockfile (only one sync should run at any time)
#LOCK="${TO}/Archive-Update-in-Progress-${MIRRORNAME}"
# Timeout for the lockfile, in case we have bash older than v4 (and no /proc)
# LOCKTIMEOUT=${LOCKTIMEOUT:-3600}
## The following file is used to make sure we will end up with a correctly
## synced mirror even if we get multiple pushes in a short timeframe
#UPDATEREQUIRED="${TO}/Archive-Update-Required-${MIRRORNAME}"
## The trace file is used by a mirror check tool to see when we last
## had a successful mirror sync. Make sure that it always ends up in
## project/trace and always shows the full hostname.
## This is *relative* to ${TO}
#TRACE="project/trace/${MIRRORNAME}"
## We sync our mirror using rsync (everything else would be insane), so
## we need a few options set.
## The rsync program
#RSYNC=rsync
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## limit I/O bandwidth. Value is KBytes per second, unset or 0 means unlimited
#RSYNC_BW=""
## Default rsync options every rsync invocation sees.
#RSYNC_OPTIONS="-prltvHSB8192 --timeout 3600 --stats --exclude Archive-Update-in-Progress-${MIRRORNAME} --exclude ${TRACE} --exclude Archive-Update-Required-${MIRRORNAME}"
## Options the first pass gets. We do not want the Packages/Source indices
## here, and we also do not want to delete any files yet.
#RSYNC_OPTIONS1="--exclude Packages* --exclude Sources* --exclude Release* --exclude InRelease --exclude ls-lR*"
## Options the second pass gets. Now we want the Packages/Source indices too
## and we also want to delete files. We also want to delete files that are
## excluded.
#RSYNC_OPTIONS2="--max-delete=40000 --delay-updates --delete --delete-after --delete-excluded"
## You may establish the connection via a web proxy by setting the environment
## variable RSYNC_PROXY to a hostname:port pair pointing to your web proxy. Note
## that your web proxy's configuration must support proxy connections to port 873.
# RSYNC_PROXY=
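## For example (proxy host and port below are placeholders only):
# RSYNC_PROXY=proxy.example.org:8080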
## The following three options are used in case we want to "callback" the host
## we got pushed from.
#CALLBACKUSER="archvsync"
#CALLBACKHOST="none"
#CALLBACKKEY="none"
## Hook scripts can be run at various places during the sync.
## Leave them blank if you don't want any
## Hook1: After lock is acquired, before first rsync
## Hook2: After first rsync, if successful
## Hook3: After second rsync, if successful
## Hook4: Right before leaf mirror triggering
## Hook5: After leaf mirror trigger, only if we have slave mirrors (HUB=true)
##
## Note that Hook3 and Hook4 are likely to be called directly after each other.
## Difference is: Hook3 is called *every* time the second rsync was successful,
## even if the mirroring needs to re-run thanks to a second push.
## Hook4 is only effective if we are done with mirroring.
#HOOK1=
#HOOK2=
#HOOK3=
#HOOK4=
#HOOK5=

40
debian/etc/pushpdo.conf.sample vendored Normal file
View File

@ -0,0 +1,40 @@
########################################################################
########################################################################
## This is a sample configuration file for the pushpdo script. ##
## Most of the values are commented out and just shown here for ##
## completeness, together with their default value. ##
########################################################################
########################################################################
## Which ssh key to use?
#KEYFILE=.ssh/pushmirror
## The directory for our logfiles
#LOGDIR="${BASEDIR}/log"
## Our own logfile
#LOG="${LOGDIR}/${NAME}.log"
## Our lockfile directory
#LOCKDIR="${BASEDIR}/locks"
## We do create a logfile for every run. To save space we rotate it; this
## defines how many we keep
#LOGROTATE=14
## Our mirrorfile
#MIRRORS="${BASEDIR}/etc/${NAME}.mirror"
## extra ssh options we might want. *hostwide*
#SSH_OPTS=""
## The script can send logs (or error messages) to a mail address.
## If this is unset it will default to the local root user unless it is run
## on a .debian.org machine where it will default to the mirroradm people.
#MAILTO="root"
## How long to wait for mirrors to do stage1 if we have multi-stage syncing
#PUSHDELAY=240
## which path to push
#MIRRORPATH="/org/packages.debian.org/mirror/"

21
debian/etc/pushpdo.mirror.sample vendored Normal file
View File

@ -0,0 +1,21 @@
# Definition of mirror hosts we push.
# One mirror per line, with the following fields defined.
#
# ShortName HostName User SSHProtocol SSHKeyFile
#
# ShortName will be used as a shorthand in logfile outputs and for the logfile
# where every ssh output gets redirected to.
#
# If no SSHKeyFile is given, the default from the config file applies.
# If SSHProtocol is empty, it will default to 2, but if you want to
# define a keyfile you HAVE TO set protocol too!
#
# Examples:
#
# piatti piatti.debian.org archvsync
# One special value is allowed: DELAY
# This word has to be on a line itself, followed by a space and a number.
# nothing else, not even whitespace. It will trigger a pause of $number
# seconds between the two mirrors. If no number is given it defaults to
# 60 seconds.
piatti piatti.debian.org archvsync

53
debian/etc/runmirrors.conf.sample vendored Normal file
View File

@ -0,0 +1,53 @@
########################################################################
########################################################################
## This is a sample configuration file for the runmirrors script. ##
## Most of the values are commented out and just shown here for ##
## completeness, together with their default value. ##
########################################################################
########################################################################
## Which ssh key to use?
#KEYFILE=.ssh/pushmirror
## The directory for our logfiles
#LOGDIR="${BASEDIR}/log"
## Our own logfile
#LOG="${LOGDIR}/${NAME}.log"
## Our lockfile directory
#LOCKDIR="${BASEDIR}/locks"
## We do create a logfile for every run. To save space we rotate it; this
## defines how many we keep
#LOGROTATE=14
## Our mirrorfile
#MIRRORS="${BASEDIR}/etc/${NAME}.mirror"
## extra ssh options we might want. *hostwide*
## By default, ignore ssh key change of leafs
#SSH_OPTS="-o StrictHostKeyChecking=no"
## The script can send logs (or error messages) to a mail address.
## If this is unset it will default to the local root user unless it is run
## on a .debian.org machine where it will default to the mirroradm people.
#MAILTO="root"
## Whats our archive name? We will also tell our leafs about it
## This is usually empty, but if we are called as "runmirrors bpo"
## it will default to bpo. This way one runmirrors script can serve
## multiple archives, similar to what ftpsync does.
#PUSHARCHIVE="${CONF}"
## How long to wait for mirrors to do stage1 if we have multi-stage syncing
#PUSHDELAY=600
## Hook scripts can be run at various places.
## Leave them blank/commented out if you don't want any
## Hook1: After reading config, before doing the first real action
## Hook2: Between two hosts to push
## Hook3: When everything is done
#HOOK1=""
#HOOK2=""
#HOOK3=""

72
debian/etc/runmirrors.mirror.sample vendored Normal file
View File

@ -0,0 +1,72 @@
# Definition of mirror hosts we push.
# One mirror per line, with the following fields defined.
#
# Type ShortName HostName User SSHProtocol SSHKeyFile
#
# ALTERNATIVELY the line may look like
#
# Type ShortName HostName User -$SOMESSHOPTION
#
# The fields Type, ShortName, HostName and User are *mandatory*.
#
# Type is either all, staged or mhop, meaning:
# all - do a "normal" push. Trigger them, go on.
# staged - do a two-stage push, waiting for them (and all others that are
#          staged) after stage1 before doing stage2
# mhop - send a multi-hop staged push. This will tell the mirror to initiate
# a mhop/stage1 push to its staged/mhop mirrors and then exit.
#          When all mhop mirrors have reported back, we then send stage2 through to them.
#
# ShortName will be used as a shorthand in logfile outputs and for the logfile
# where every ssh output gets redirected to.
#
# If no SSHKeyFile is given, the default from the config file applies.
# If SSHProtocol is empty, it will default to 2, but if you want to
# define a keyfile you HAVE TO set protocol too!
#
# With the ALTERNATIVE syntax you are able to use any special ssh option
# you want just for one special mirror. The option after the username
# then MUST start with a -, in which case the whole rest of the line is taken
# as a set of options to give to ssh; it is passed through without any
# further processing.
#
# There is one caveat here: Should you want to use the -i option to give
# another ssh key to use, keep in mind that the master keyfile will
# always be presented too! That is, ssh will show both keys to the other
# side and the first one presented wins. Which might not be the key you
# want. There is currently no way around this, as that would mean
# dropping backward compatibility.
#
# Backwards compatibility:
# An older runmirrors script will NOT run with a newer runmirrors.mirror file, but
# a new runmirrors can run with an old runmirrors.mirror file. This should make updates
# painless.
#
# Examples:
# all eu.puccini puccini.debian.org archvsync 2
#
# -> will push puccini.debian.org, user archvsync, using ssh protocol 2
# and the globally configured ssh key.
#
# all eu.puccini puccini.debian.org archvsync -p 2222
#
# -> will do the same as above, but use port 2222 to connect to.
#
# staged eu.puccini puccini.debian.org archvsync
# staged eu.powell powell.debian.org archvsync
#
# -> will push both puccini and powell in stage1, waiting for both to
# finish stage1 before stage2 gets pushed. The username will be archvsync.
#
# staged eu.puccini puccini.debian.org archvsync
# mhop eu.powell powell.debian.org archvsync
#
# -> will do the same as above, but powell gets told about mhop and can then
# push its own staged/mhop mirrors before returning. When both returned
# then stage2 is sent to both.
#
# One special value is allowed: DELAY
# This word has to be on a line itself, followed by a space and a number.
# nothing else, not even whitespace. It will trigger a pause of $number
# seconds between the two mirrors. If no number is given it defaults to
# 600 seconds.

0
debian/etc/secrets/.dummy vendored Normal file
View File

121
debian/etc/websync.conf.sample vendored Normal file
View File

@ -0,0 +1,121 @@
########################################################################
########################################################################
## This is a sample configuration file for the websync mirror script. ##
## Most of the values are commented out and just shown here for ##
## completeness, together with their default value. ##
########################################################################
########################################################################
## Mirrorname. This is used for things like the trace file and should always
## be the full hostname of the mirror.
#MIRRORNAME=${MIRRORNAME:-`hostname -f`}
## Destination of the mirrored files. Should be an empty directory.
## CAREFUL, this directory will contain the mirror. Everything else
## that might have happened to be in there WILL BE GONE after the mirror sync!
#TO=${TO:-"/org/www.debian.org/www"}
## The upstream name of the rsync share.
#RSYNC_PATH="web.debian.org"
## The host we mirror from
#RSYNC_HOST=www-master.debian.org
## In case we need a user to access the rsync share at our upstream host
#RSYNC_USER=
## If we need a user we also need a password
#RSYNC_PASSWORD=
## In which directory should logfiles end up
## Note that BASEDIR defaults to $HOME, but can be set before calling the
## ftpsync script to any value you want (for example using pam_env)
#LOGDIR="${BASEDIR}/log"
## Name of our own logfile.
## Note that ${NAME} is set by the websync script
#LOG="${LOGDIR}/${NAME}.log"
## The script can send logs (or error messages) to a mail address.
## If this is unset it will default to the local root user unless it is run
## on a .debian.org machine where it will default to the mirroradm people.
#MAILTO="root"
## If you do want a mail about every single sync, set this to false.
## Otherwise mails are only sent if a mirror sync fails.
#ERRORSONLY="true"
## If you want the logs to also include output of rsync, set this to true.
## Careful, the logs can get pretty big, especially if it is the first mirror
## run
#FULLLOGS="false"
## If you do want to exclude files from the mirror run, put --exclude statements here.
## See rsync(1) for the exact syntax, these are passed to rsync as written here.
## Please do not use this except for rare cases and after you talked to us.
## For the sponsor logos see SITE_FILTER
#EXCLUDE=${EXCLUDE:-"--exclude ${HOSTNAME}"}
## And site-specific excludes; by default it's the sponsor stuff that should be local to all
#SITE_FILTER=${SITE_FILTER:-"--include sponsor.deb.* --exclude sponsor_img.* --exclude sponsor.html --exclude sponsor.*.html --filter=protect_sponsor_img.* --filter=protect_sponsor.html --filter=protect_sponsor.*.html"}
## Do we have leaf mirrors to signal that we are done and they should sync?
## If so set it to true and make sure you configure runmirrors-websync.mirrors
## and runmirrors-websync.conf for your needs.
#HUB=false
## We do create three logfiles for every run. To save space we rotate them; this
## defines how many we keep
#LOGROTATE=14
## Our own lockfile (only one sync should run at any time)
#LOCK="${TO}/Website-Update-in-Progress-${MIRRORNAME}"
## The following file is used to make sure we will end up with a correctly
## synced mirror even if we get multiple pushes in a short timeframe
#UPDATEREQUIRED="${TO}/Website-Update-Required-${MIRRORNAME}"
## The trace file is used by a mirror check tool to see when we last
## had a successful mirror sync. Make sure that it always ends up in
## .project/trace and always shows the full hostname.
## This is *relative* to ${TO}
#TRACE=".project/trace/${MIRRORNAME}"
## We sync our mirror using rsync (everything else would be insane), so
## we need a few options set.
## The rsync program
#RSYNC=rsync
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## BE VERY CAREFUL WHEN YOU CHANGE THE RSYNC_OPTIONS! BETTER DON'T!
## Default rsync options every rsync invocation sees.
#RSYNC_OPTIONS="-prltvHSB8192 --timeout 3600 --stats --exclude Archive-Update-in-Progress-${MIRRORNAME} --exclude ${TRACE} --exclude Archive-Update-Required-${MIRRORNAME}"
## Default rsync options
#RSYNC_OPTIONS2=${RSYNC_OPTIONS2:-"--max-delete=40000 --delay-updates --delete --delete-after --delete-excluded"}
## You may establish the connection via a web proxy by setting the environment
## variable RSYNC_PROXY to a hostname:port pair pointing to your web proxy. Note
## that your web proxy's configuration must support proxy connections to port 873.
# RSYNC_PROXY=
## Hook scripts can be run at various places during the sync.
## Leave them blank if you don't want any
## Hook1: After lock is acquired, before first rsync
## Hook2: After first rsync, if successful
## Hook3: After second rsync, if successful
## Hook4: Right before leaf mirror triggering
## Hook5: After leaf mirror trigger, only if we have slave mirrors (HUB=true)
##
## Note that Hook3 and Hook4 are likely to be called directly after each other.
## Difference is: Hook3 is called *every* time the second rsync was successful,
## even if the mirroring needs to re-run thanks to a second push.
## Hook4 is only effective if we are done with mirroring.
#HOOK1=
#HOOK2=
#HOOK3=
#HOOK4=
#HOOK5=

1371
debian/mirrorcheck/bin/dmc-archive.pl vendored Executable file

File diff suppressed because it is too large Load Diff

1371
debian/mirrorcheck/bin/dmc.pl vendored Executable file

File diff suppressed because it is too large Load Diff

0
debian/mirrorcheck/www/.dummy vendored Normal file
View File

1
foooooooo Normal file
View File

@ -0,0 +1 @@
~/bin/csc-sync-ssh uw-coursewear/cs136 linux024.student.cs.uwaterloo.ca /u/cs136/mirror.uwaterloo.ca csc01 ~/.ssh/id_rsa_csc01

View File

@ -0,0 +1,3 @@
<!--#include virtual="/include/ubar.txt" -->
<!-- The bandwidth bar program is available at:
http://www.kernel.org/pub/software/web/bwbar/ -->

View File

@ -0,0 +1,5 @@
<div class="csclogo">
<a href="http://mirror.csclub.uwaterloo.ca">
<img src="/include/header.png" alt="Computer Science Club Mirror - The University of Waterloo - Funded by MEF" />
</a>
</div>

View File

@ -0,0 +1,7 @@
img {
border-width: 0;
}
div.biglogo {
height: 100px;
}

BIN
git_old/include/favicon.ico Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 KiB

BIN
git_old/include/header.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 14 KiB

11
git_old/include/motd.msg Normal file
View File

@ -0,0 +1,11 @@
*
* Welcome to the University of Waterloo Computer Science Club Mirror
*
* http://csclub.uwaterloo.ca/
*
* Hardware funded by MEF (http://www.mef.uwaterloo.ca/)
*
* Admin Contact: systems-committee@csclub.uwaterloo.ca
* Hostname: mirror.csclub.uwaterloo.ca
* IP Address: 129.97.134.71
*

View File

@ -0,0 +1,2 @@
User-agent: *
Disallow: /

119
git_old/misc/debian-check-md5sum Executable file
View File

@ -0,0 +1,119 @@
#!/usr/bin/python2.5
import sys, os, re, gzip, bz2, hashlib
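#
# Illustrative invocation (the path is only an example; point it at the local
# mirror root that contains dists/ and pool/):
#   ./debian-check-md5sum /mirror/root/debian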
package_file_map = {
'Packages' : file,
'Packages.gz' : gzip.GzipFile,
'Packages.bz2' : bz2.BZ2File,
'Sources' : file,
'Sources.gz' : gzip.GzipFile,
'Sources.bz2' : bz2.BZ2File,
}
def parse_packages_file(path):
try:
open_func = package_file_map[os.path.basename(path)]
file = open_func(path)
except IOError, e:
print "WARNING: failed to open %s: %s" % (path, e)
return {}
cur_dict = {}
key, value = None, ''
ret_list = []
while True:
try:
line = file.readline()
except IOError, e:
print "WARNING: failed to read %s: %s" % (path, e)
print "WARNING: %s" % e
return {}
# check if we are done with current value
if (line == '' or line[0] == '\n' or line[0] != ' ') and key != None:
cur_dict[key] = value
if line == '' or line == '\n': # done current block
if cur_dict != {}:
ret_list.append(cur_dict)
cur_dict = {}
key = None
if line == '': break
elif line[0] == ' ': # multi-line value
value += '\n' + line[1:-1]
else:
if line[-1] == '\n': line = line[:-1]
pos = line.find(':')
key = line[:pos]
if key == '': key = None
value = line[pos+2:]
return ret_list
def find_packages_files(path):
files = []
for file in os.listdir(path):
file_path = "%s/%s" % (path, file)
if os.path.islink(file_path):
continue
elif os.path.isdir(file_path):
files += find_packages_files(file_path)
elif file in package_file_map:
files.append(file_path)
return files
if len(sys.argv) != 2:
print "Usage: debian-check-md5sum.py base-dir"
sys.exit(1)
base_dir = sys.argv[1]
all = {}
files_regex = re.compile('(\S+)\s+(\S+)\s+(\S+)')
for file in find_packages_files(base_dir):
file_type = os.path.basename(file).split('.')[0]
for package in parse_packages_file(file):
if file_type == 'Packages':
if 'Filename' in package:
all[package['Filename']] = package
elif file_type == 'Sources':
files = package['Files'].split('\n')
for file in files:
if file == '': continue
match = files_regex.match(file)
file_path = '%s/%s' % (package['Directory'], match.group(3))
all[file_path] = { 'MD5sum' : match.group(1) }
print "NOTICE: need to check %d files" % len(all)
ret_val = 0
block_size = 65536
for (file, package) in all.iteritems():
path = '%s/%s' % (base_dir, file)
try:
file = open(path, 'rb')
except IOError:
print "WARNING: missing %s" % path
continue
if 'SHA256' in package:
md = hashlib.sha256()
hash = package['SHA256']
elif 'SHA1' in package:
md = hashlib.sha1()
hash = package['SHA1']
elif 'MD5sum' in package:
md = hashlib.md5()
hash = package['MD5sum']
else:
print "WARNING: no hash found for %s" % path
print package
exit(1)
while True:
data = file.read(block_size)
if data == '': break
md.update(data)
hash_calc = md.hexdigest()
if hash == hash_calc:
print "NOTICE: hash ok for %s [hash = %s]" % (path, hash)
else:
print "ERROR: hash mismatch for %s [hash = %s, hash_calc = %s]" % \
(path, hash, hash_calc)
ret_val = 1
exit(ret_val)

View File

@ -0,0 +1,22 @@
# /etc/cron.d/csc-mirror: mirror cron jobs
# m h dom mon dow user command
# update orion routes
30 5 * * * root /usr/local/sbin/update-orion-routes
# make torrents
*/10 * * * * mirror /home/mirror/bin/make-torrents > /dev/null 2> /dev/null
# The rsync cron jobs are now run by a small script a2brenna wrote
# that works a bit more intelligently than cron. For one thing, it
# won't kick off a sync when one's already running. Please see
# ~mirror/merlin.
# -- mspang
# regenerate mirror index at 5:40 am on 14th & 28th of every month
# feel free to run this manually if you've added or removed an
# archive or some such thing
#
# Documented here: http://wiki.csclub.uwaterloo.ca/Mirror#Index
40 5 */14 * * mirror cd /home/mirror/mirror-index && /home/mirror/mirror-index/make-index.py

View File

@ -0,0 +1,48 @@
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The routes added here will not be visible to the 'route' command; you
# should use 'ip route show table foo' instead.
auto eth0
iface eth0 inet static
address 129.97.134.42
netmask 255.255.255.0
gateway 129.97.134.1
# campus routes are checked first and are maintained here
up ip rule add from all lookup campus prio 1
down ip rule del from all lookup campus prio 1
up ip route add 129.97.0.0/16 via 129.97.134.1 dev eth0 table campus realm campus
down ip route del 129.97.0.0/16 via 129.97.134.1 dev eth0 table campus realm campus
up ip route add 10.0.0.0/8 via 129.97.134.1 dev eth0 table campus realm campus
down ip route del 10.0.0.0/8 via 129.97.134.1 dev eth0 table campus realm campus
up ip route add 172.16.0.0/20 via 129.97.134.1 dev eth0 table campus realm campus
down ip route del 172.16.0.0/20 via 129.97.134.1 dev eth0 table campus realm campus
up ip route add 192.168.0.0/16 via 129.97.134.1 dev eth0 table campus realm campus
down ip route del 192.168.0.0/16 via 129.97.134.1 dev eth0 table campus realm campus
# orion routes are checked second and are maintained by a cronjob
up ip rule add from all lookup orion prio 2
down ip rule del from all lookup orion prio 2
# Traffic shaping - 100M cogent, 200M orion, 700M campus.
# Note that the border router is configured with a similar policy, but will
# drop rather than queue excess packets. These rules keep them from dropping.
up tc qdisc add dev eth0 parent root handle 1: htb default 2 r2q 10000
up tc class add dev eth0 parent 1: classid 1:1 htb rate 1000Mbit
up tc class add dev eth0 parent 1:1 classid 1:2 htb rate 100Mbit
up tc class add dev eth0 parent 1:1 classid 1:3 htb rate 200Mbit
up tc class add dev eth0 parent 1:1 classid 1:4 htb rate 700Mbit ceil 1000Mbit
up tc filter add dev eth0 parent 1: protocol ip pref 2 route to orion flowid 1:3
up tc filter add dev eth0 parent 1: protocol ip pref 1 route to campus flowid 1:4
down tc qdisc del dev eth0 parent root
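# To inspect the shaping classes and their byte counters (illustrative
# command, not part of the interface configuration):
#   tc -s class show dev eth0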
auto eth0:mirror
iface eth0:mirror inet static
address 129.97.134.71
netmask 255.255.255.0

186
git_old/routing/orionroutes.py Executable file
View File

@ -0,0 +1,186 @@
#!/usr/bin/python
# This file updates the orion routing table.
# Put it at /usr/local/sbin/orionroutes.py
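# It reads comma-separated routes on stdin and uses only the first three
# fields (ip, netmask, gateway). A sketch of an input line (the prefix is
# made up; the gateway is one of ORION_VIAS below):
#   129.97.128.0,255.255.128.0,129.97.1.46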
# Configuration
ORION_TABLE = 1 # from /etc/iproute2/rt_tables
ORION_REALMS = 1 # from /etc/iproute2/rt_realms
ORION_VIAS = [ "66.97.23.33", "66.97.28.65", "129.97.1.46" ]
ORION_GW = "129.97.134.1"
ORION_SRC = "129.97.134.42"
ORION_IFACE = "eth0"
# Don't touch anything beyond here
import sys, iplib, SubnetTree
from ctypes import *
NETLINK_ROUTE = 0
AF_UNSPEC = 0
RT_SCOPE_UNIVERSE = 0
RTPROT_STATIC = 4
NLM_F_REPLACE = 0x100
def die(msg):
sys.stderr.write("orionroutes.py: %s\n" % msg)
sys.exit(1)
try:
libnl = cdll.LoadLibrary("libnl.so.1")
nl_geterror = CFUNCTYPE(c_char_p) (("nl_geterror", libnl), None)
nl_handle_alloc = CFUNCTYPE(c_void_p) (("nl_handle_alloc", libnl), None)
nl_connect = CFUNCTYPE(c_int, c_void_p, c_int) \
(("nl_connect", libnl), ((1, "handle", None), (1, "type", NETLINK_ROUTE)))
rtnl_route_alloc = CFUNCTYPE(c_void_p) (("rtnl_route_alloc", libnl), None)
rtnl_link_alloc_cache = CFUNCTYPE(c_void_p, c_void_p) \
(("rtnl_link_alloc_cache", libnl), ((1, "handle", None), ))
rtnl_link_name2i = CFUNCTYPE(c_int, c_void_p, c_char_p) \
(("rtnl_link_name2i", libnl), ((1, "cache", None), (1, "iface", -1)))
rtnl_route_set_oif = CFUNCTYPE(c_void_p, c_void_p, c_int) \
(("rtnl_route_set_oif", libnl), ((1, "route", None), (1, "iface", -1)))
nl_cache_free = CFUNCTYPE(None, c_void_p) \
(("nl_cache_free", libnl), ((1, "cache", None), ))
nl_addr_parse = CFUNCTYPE(c_void_p, c_char_p, c_int) \
(("nl_addr_parse", libnl), ((1, "dst", None), (1, "family", AF_UNSPEC)))
rtnl_route_set_dst = CFUNCTYPE(c_int, c_void_p, c_void_p) \
(("rtnl_route_set_dst", libnl), ((1, "route", None), (1, "dst", None)))
rtnl_route_set_pref_src = CFUNCTYPE(c_int, c_void_p, c_void_p) \
(("rtnl_route_set_pref_src", libnl), ((1, "route", None), (1, "src", None)))
nl_addr_put = CFUNCTYPE(None, c_void_p) \
(("nl_addr_put", libnl), ((1, "addr", None), ))
rtnl_route_set_gateway = CFUNCTYPE(c_int, c_void_p, c_void_p) \
(("rtnl_route_set_gateway", libnl), ((1, "route", None), (1, "gw", None)))
rtnl_route_set_table = CFUNCTYPE(None, c_void_p, c_int) \
(("rtnl_route_set_table", libnl), ((1, "route", None), (1, "table", -1)))
rtnl_route_set_scope = CFUNCTYPE(None, c_void_p, c_int) \
(("rtnl_route_set_scope", libnl), ((1, "route", None), (1, "scope", -1)))
rtnl_route_set_protocol = CFUNCTYPE(None, c_void_p, c_int) \
(("rtnl_route_set_protocol", libnl), ((1, "route", None), (1, "proto", -1)))
rtnl_route_set_realms = CFUNCTYPE(None, c_void_p, c_int) \
(("rtnl_route_set_realms", libnl), ((1, "route", None), (1, "realms", -1)))
rtnl_route_add = CFUNCTYPE(c_int, c_void_p, c_void_p, c_int) \
(("rtnl_route_add", libnl), ((1, "handle", None), (1, "route", None), (1, "flags", 0)))
rtnl_route_put = CFUNCTYPE(None, c_void_p) \
(("rtnl_route_put", libnl), ((1, "route", None), ))
nl_handle_destroy = CFUNCTYPE(None, c_void_p) \
(("nl_handle_destroy", libnl), ((1, "handle", None), ))
rtnl_route_alloc_cache = CFUNCTYPE(c_void_p, c_void_p) \
(("rtnl_route_alloc_cache", libnl), ((1, "handle", None), ))
nl_cache_get_first = CFUNCTYPE(c_void_p, c_void_p) \
(("nl_cache_get_first", libnl), ((1, "cache", None), ))
rtnl_route_get_table = CFUNCTYPE(c_int, c_void_p) \
(("rtnl_route_get_table", libnl), ((1, "route", None), ))
rtnl_route_get_dst = CFUNCTYPE(c_void_p, c_void_p) \
(("rtnl_route_get_dst", libnl), ((1, "route", None), ))
nl_addr2str = CFUNCTYPE(c_char_p, c_void_p, c_char_p, c_int) \
(("nl_addr2str", libnl), ((1, "addr", None), (1, "buffer", None), (1, "size", 0)))
rtnl_route_del = CFUNCTYPE(c_int, c_void_p, c_void_p, c_int) \
(("rtnl_route_del", libnl), ((1, "handle", None), (1, "route", None), (1, "flags", 0)))
nl_cache_get_next = CFUNCTYPE(c_void_p, c_void_p) \
(("nl_cache_get_next", libnl), ((1, "object", None), ))
except Exception,e:
die("Failed to load libnl: %s" % e)
def nl_die(func):
die("%s: %s" % (func, nl_geterror()))
ips = [[] for i in range(33)]
for line in sys.stdin:
try:
ip, mask, via = line.strip().split(',')[0:3]
except (KeyError, ValueError):
die("Malformed line: %s" % line.strip())
if via not in ORION_VIAS:
continue
bits = int(iplib.IPv4NetMask(mask).get_bits())
ips[bits].append(int(iplib.IPv4Address(ip)))
count = sum([len(ip_list) for ip_list in ips])
if count < 10:
die("Not enough routes (got %d)" % count)
cidrs = []
for bits in range(32, 1, -1):
ips[bits].sort()
last_ip = 0
for ip in ips[bits]:
if ip != last_ip and (ip ^ last_ip) == (1 << (32 - bits)):
ips[bits - 1].append(ip & (((1 << (bits - 1)) - 1) << (32 - (bits - 1))))
last_ip = 0
elif last_ip != 0:
cidrs.append((iplib.IPv4Address(last_ip), bits))
last_ip = ip
if last_ip != 0:
cidrs.append((iplib.IPv4Address(last_ip), bits))
nlh = nl_handle_alloc()
if nlh == None: nl_die("nl_handle_alloc")
if nl_connect(nlh, NETLINK_ROUTE) < 0: nl_die("nl_connect")
link_cache = rtnl_link_alloc_cache(nlh)
if link_cache == None: nl_die("rtnl_link_alloc")
iface = rtnl_link_name2i(link_cache, ORION_IFACE)
if iface < 0: nl_die("rtnl_link_name2i")
nl_cache_free(link_cache)
cidrs.sort(lambda (ip1, bits1), (ip2, bits2): cmp(ip1, ip2) if bits1 == bits2 else (bits1 - bits2))
tree = SubnetTree.SubnetTree()
for (ip, bits) in cidrs:
if str(ip) not in tree:
cidr = "%s/%s" % (ip, bits)
tree[cidr] = None
route = rtnl_route_alloc()
if route == None: nl_die("rtnl_route_alloc")
dstaddr = nl_addr_parse(cidr, AF_UNSPEC)
if dstaddr == None: nl_die("nl_addr_parse(%s)" % cidr)
if rtnl_route_set_dst(route, dstaddr) < 0: nl_die("rtnl_route_set_dst")
nl_addr_put(dstaddr)
srcaddr = nl_addr_parse(ORION_SRC, AF_UNSPEC)
if srcaddr == None: nl_die("nl_addr_parse(%s)" % ORION_SRC)
if rtnl_route_set_pref_src(route, srcaddr) < 0: nl_die("nl_route_set_pref_src")
nl_addr_put(srcaddr)
gwaddr = nl_addr_parse(ORION_GW, AF_UNSPEC)
if gwaddr == None: nl_die("nl_addr_parse(%s)" % ORION_GW)
if rtnl_route_set_gateway(route, gwaddr) < 0: nl_die("nl_route_set_gateway")
nl_addr_put(gwaddr)
rtnl_route_set_oif(route, iface)
rtnl_route_set_table(route, ORION_TABLE)
rtnl_route_set_scope(route, RT_SCOPE_UNIVERSE)
rtnl_route_set_protocol(route, RTPROT_STATIC)
rtnl_route_set_realms(route, ORION_REALMS)
if rtnl_route_add(nlh, route, NLM_F_REPLACE) < 0: nl_die("rtnl_route_add(dst=%s)" % cidr)
rtnl_route_put(route)
route_cache = rtnl_route_alloc_cache(nlh)
if route_cache == None: nl_die("rtnl_route_alloc_cache")
dstaddr_s = create_string_buffer(100)
route = nl_cache_get_first(route_cache)
while route != None:
table = rtnl_route_get_table(route)
if table != ORION_TABLE:
route = nl_cache_get_next(route)
continue
dstaddr = rtnl_route_get_dst(route)
if dstaddr == None:
continue
if nl_addr2str(dstaddr, dstaddr_s, sizeof(dstaddr_s)) == None: nl_die("nl_addr2str")
dstaddr = str(repr(dstaddr_s.value)).strip('\'').split('/')[0]
if dstaddr not in tree:
rtnl_route_del(nlh, route, 0)
route = nl_cache_get_next(route)
nl_cache_free(route_cache)
nl_handle_destroy(nlh)

11
git_old/routing/rt_realms Normal file
View File

@ -0,0 +1,11 @@
# This file lives at /etc/iproute2/rt_realms
#
# reserved values
#
0 cosmos
#
# local
#
1 orion
2 campus

14
git_old/routing/rt_tables Normal file
View File

@ -0,0 +1,14 @@
# This file lives at /etc/iproute2/rt_tables
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
1 orion
2 campus

View File

@ -0,0 +1,6 @@
#!/bin/bash
# This file updates the orion routing table.
# Put it at /usr/local/sbin/update-orion-routes.
wget --quiet -O - https://istns.uwaterloo.ca/borderroutes/borderroutes.txt | /usr/local/sbin/orionroutes.py

View File

@ -0,0 +1,102 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <libgen.h>
#include <netlink/route/class.h>
#include <netlink/route/link.h>
#include <netlink/cache-api.h>
#include <netlink/object.h>
#include "mirror-nl-glue.h"
static struct nl_cache *link_cache, *class_cache;
static struct rtnl_link *eth;
static int ifindex;
struct class_info cogent_class = { "cogent", "01:02", };
struct class_info orion_class = { "orion", "01:03", };
struct class_info campus_class = { "campus", "01:04", };
static struct nl_handle *nl_handle;
void die(const char *message) {
fprintf(stderr, "fatal: %s\n", message);
exit(1);
}
static void match_obj(struct nl_object *obj, void *arg) {
struct nl_object *needle = *(struct nl_object **)arg;
struct nl_object **ret = (struct nl_object **)arg + 1;
if (!*ret && nl_object_identical(obj, needle)) {
nl_object_get(obj);
*ret = obj;
}
}
static struct rtnl_class *get_class_by_id(char *id, int ifindex) {
uint32_t handle;
struct rtnl_class *needle;
struct nl_object *magic[2];
if (rtnl_tc_str2handle(id, &handle))
die("invalid id");
needle = rtnl_class_alloc();
rtnl_class_set_ifindex(needle, ifindex);
rtnl_class_set_handle(needle, handle);
magic[0] = (struct nl_object *)needle;
magic[1] = (struct nl_object *)NULL;
nl_cache_foreach(class_cache, match_obj, magic);
rtnl_class_put(needle);
return (struct rtnl_class *)magic[1];
}
uint64_t get_class_byte_count(struct class_info *info) {
struct rtnl_class *class = get_class_by_id(info->id, ifindex);
uint64_t bytes;
if (!class)
die("class not found");
bytes = rtnl_class_get_stat(class, RTNL_TC_BYTES);
rtnl_class_put(class);
return bytes;
}
void mirror_stats_refresh(void) {
nl_cache_refill(nl_handle, class_cache);
}
void mirror_stats_initialize(void) {
nl_handle = nl_handle_alloc();
if (!nl_handle)
die("unable to allocate handle");
if (nl_connect(nl_handle, NETLINK_ROUTE) < 0)
die("unable to connect to netlink");
link_cache = rtnl_link_alloc_cache(nl_handle);
if (!link_cache)
die("unable to allocate link cache");
eth = rtnl_link_get_by_name(link_cache, "eth0");
if (!eth)
die("unable to acquire eth0");
ifindex = rtnl_link_get_ifindex(eth);
class_cache = rtnl_class_alloc_cache(nl_handle, ifindex);
if (!class_cache)
die("unable to allocate class cache");
}
void mirror_stats_cleanup(void) {
rtnl_link_put(eth);
nl_cache_free(class_cache);
nl_cache_free(link_cache);
nl_close(nl_handle);
nl_handle_destroy(nl_handle);
}

View File

@ -0,0 +1,25 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <libgen.h>
#include <netlink/route/class.h>
#include <netlink/route/link.h>
#include <netlink/cache-api.h>
#include <netlink/object.h>
struct class_info {
char *name;
char *id;
};
extern struct class_info cogent_class;
extern struct class_info orion_class;
extern struct class_info campus_class;
void mirror_stats_refresh(void);
void mirror_stats_initialize(void);
void mirror_stats_cleanup(void);
void die(const char *);
uint64_t get_class_byte_count(struct class_info *);

View File

@ -0,0 +1,40 @@
#include "mirror-nl-glue.h"
#include <rrd.h>
int main(void) {
char *argv[3];
unsigned long packet_count;
switch(fork()) {
case -1:
return -1;
case 0:
close(0);
close(1);
setsid();
break;
default:
_exit(0);
}
mirror_stats_initialize();
argv[0] = malloc(1024);
for (;;) {
packet_count = get_class_byte_count(&cogent_class);
snprintf(argv[0], 1024, "N:%lu", packet_count);
rrd_update_r("/var/rrdtool/cogent.rrd", NULL, 1, argv);
packet_count = get_class_byte_count(&orion_class);
snprintf(argv[0], 1024, "N:%lu", packet_count);
rrd_update_r("/var/rrdtool/orion.rrd", NULL, 1, argv);
packet_count = get_class_byte_count(&campus_class);
snprintf(argv[0], 1024, "N:%lu", packet_count);
rrd_update_r("/var/rrdtool/campus.rrd", NULL, 1, argv);
if (rrd_test_error()) {
fprintf(stderr, "ERROR: %s\n", rrd_get_error());
rrd_clear_error();
}
sleep(5);
mirror_stats_refresh();
}
}
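/*
 * The RRD files updated above must already exist. A creation sketch
 * (assumptions: 5 second step to match the sleep(5) above, data source
 * name "snmp_oid" to match the rrdgraph scripts, type COUNTER since the
 * class byte counts only ever grow; heartbeat and RRA sizes are only
 * examples):
 *
 *   rrdtool create /var/rrdtool/cogent.rrd --step 5 \
 *     DS:snmp_oid:COUNTER:10:0:U \
 *     RRA:AVERAGE:0.5:1:105120 RRA:AVERAGE:0.5:720:8760
 */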

View File

@ -0,0 +1,39 @@
#!/bin/sh
/usr/bin/rrdtool graph /mirror/root/stats_monthly.png \
-s -1m \
--imgformat=PNG \
--title='Mirror Traffic' \
--base=1000 \
--height=120 \
--width=600 \
--alt-autoscale-max \
--lower-limit=0 \
--vertical-label='bits per second' \
--slope-mode \
--font TITLE:10: \
--font AXIS:8: \
--font LEGEND:8: \
--font UNIT:8: \
DEF:a="/var/rrdtool/cogent.rrd":snmp_oid:AVERAGE \
DEF:b="/var/rrdtool/orion.rrd":snmp_oid:AVERAGE \
DEF:c="/var/rrdtool/campus.rrd":snmp_oid:AVERAGE \
CDEF:cdefa=a,8,* \
CDEF:cdefe=b,8,* \
CDEF:cdefi=c,8,* \
CDEF:cdefbc=TIME,1318318854,GT,a,a,UN,0,a,IF,IF,TIME,1318318854,GT,b,b,UN,0,b,IF,IF,TIME,1318318854,GT,c,c,UN,0,c,IF,IF,+,+,8,* \
AREA:cdefa#157419FF:"Cogent" \
GPRINT:cdefa:LAST:"Current\:%8.2lf%s" \
GPRINT:cdefa:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefa:MAX:"Maximum\:%8.2lf%s\n" \
AREA:cdefe#00CF00FF:"Orion":STACK \
GPRINT:cdefe:LAST:" Current\:%8.2lf%s" \
GPRINT:cdefe:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefe:MAX:"Maximum\:%8.2lf%s\n" \
AREA:cdefi#EE5019FF:"Campus":STACK \
GPRINT:cdefi:LAST:"Current\:%8.2lf%s" \
GPRINT:cdefi:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefi:MAX:"Maximum\:%8.2lf%s\n" \
LINE1:cdefbc#000000FF:"Total" \
GPRINT:cdefbc:LAST:" Current\:%8.2lf%s" \
GPRINT:cdefbc:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefbc:MAX:"Maximum\:%8.2lf%s\n" >/dev/null 2>/dev/null

View File

@ -0,0 +1,39 @@
#!/bin/sh
/usr/bin/rrdtool graph /mirror/root/stats_yearly.png \
-s -1y \
--imgformat=PNG \
--title='Mirror Traffic' \
--base=1000 \
--height=120 \
--width=600 \
--alt-autoscale-max \
--lower-limit=0 \
--vertical-label='bits per second' \
--slope-mode \
--font TITLE:10: \
--font AXIS:8: \
--font LEGEND:8: \
--font UNIT:8: \
DEF:a="/var/rrdtool/cogent.rrd":snmp_oid:AVERAGE \
DEF:b="/var/rrdtool/orion.rrd":snmp_oid:AVERAGE \
DEF:c="/var/rrdtool/campus.rrd":snmp_oid:AVERAGE \
CDEF:cdefa=a,8,* \
CDEF:cdefe=b,8,* \
CDEF:cdefi=c,8,* \
CDEF:cdefbc=TIME,1318318854,GT,a,a,UN,0,a,IF,IF,TIME,1318318854,GT,b,b,UN,0,b,IF,IF,TIME,1318318854,GT,c,c,UN,0,c,IF,IF,+,+,8,* \
AREA:cdefa#157419FF:"Cogent" \
GPRINT:cdefa:LAST:"Current\:%8.2lf%s" \
GPRINT:cdefa:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefa:MAX:"Maximum\:%8.2lf%s\n" \
AREA:cdefe#00CF00FF:"Orion":STACK \
GPRINT:cdefe:LAST:" Current\:%8.2lf%s" \
GPRINT:cdefe:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefe:MAX:"Maximum\:%8.2lf%s\n" \
AREA:cdefi#EE5019FF:"Campus":STACK \
GPRINT:cdefi:LAST:"Current\:%8.2lf%s" \
GPRINT:cdefi:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefi:MAX:"Maximum\:%8.2lf%s\n" \
LINE1:cdefbc#000000FF:"Total" \
GPRINT:cdefbc:LAST:" Current\:%8.2lf%s" \
GPRINT:cdefbc:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefbc:MAX:"Maximum\:%8.2lf%s\n" >/dev/null 2>/dev/null

38
git_old/rrdtool/rrdgraph.sh Executable file
View File

@ -0,0 +1,38 @@
#!/bin/sh
/usr/bin/rrdtool graph /mirror/root/stats.png \
--imgformat=PNG \
--title='Mirror Traffic' \
--base=1000 \
--height=120 \
--width=600 \
--alt-autoscale-max \
--lower-limit=0 \
--vertical-label='bits per second' \
--slope-mode \
--font TITLE:10: \
--font AXIS:8: \
--font LEGEND:8: \
--font UNIT:8: \
DEF:a="/var/rrdtool/cogent.rrd":snmp_oid:AVERAGE \
DEF:b="/var/rrdtool/orion.rrd":snmp_oid:AVERAGE \
DEF:c="/var/rrdtool/campus.rrd":snmp_oid:AVERAGE \
CDEF:cdefa=a,8,* \
CDEF:cdefe=b,8,* \
CDEF:cdefi=c,8,* \
CDEF:cdefbc=TIME,1318318854,GT,a,a,UN,0,a,IF,IF,TIME,1318318854,GT,b,b,UN,0,b,IF,IF,TIME,1318318854,GT,c,c,UN,0,c,IF,IF,+,+,8,* \
AREA:cdefa#157419FF:"Cogent" \
GPRINT:cdefa:LAST:"Current\:%8.2lf%s" \
GPRINT:cdefa:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefa:MAX:"Maximum\:%8.2lf%s\n" \
AREA:cdefe#00CF00FF:"Orion":STACK \
GPRINT:cdefe:LAST:" Current\:%8.2lf%s" \
GPRINT:cdefe:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefe:MAX:"Maximum\:%8.2lf%s\n" \
AREA:cdefi#EE5019FF:"Campus":STACK \
GPRINT:cdefi:LAST:"Current\:%8.2lf%s" \
GPRINT:cdefi:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefi:MAX:"Maximum\:%8.2lf%s\n" \
LINE1:cdefbc#000000FF:"Total" \
GPRINT:cdefbc:LAST:" Current\:%8.2lf%s" \
GPRINT:cdefbc:AVERAGE:"Average\:%8.2lf%s" \
GPRINT:cdefbc:MAX:"Maximum\:%8.2lf%s\n" >/dev/null 2>/dev/null
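The three graph scripts above all pull a data source named snmp_oid out of /var/rrdtool/{cogent,orion,campus}.rrd, and the update loop elsewhere in this commit feeds those files raw byte counters every five seconds, so they are presumably COUNTER-style RRDs. The actual step and RRA layout is not recorded anywhere in this commit; the following is only a minimal sketch of how such a database could be created, with the heartbeat and archive sizes as assumptions:

  rrdtool create /var/rrdtool/cogent.rrd --step 5 \
      DS:snmp_oid:COUNTER:15:0:U \
      RRA:AVERAGE:0.5:1:17280 \
      RRA:AVERAGE:0.5:720:1460
  # repeat for orion.rrd and campus.rrd; the graphs multiply the resulting
  # bytes/second rate by 8 to plot bits per second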

3
git_old/snmp/.gitignore vendored Normal file
View File

@ -0,0 +1,3 @@
/csc-snmp-subagent
/mirror-stats
*.o

49
git_old/snmp/CSC-MIB.txt Normal file
View File

@ -0,0 +1,49 @@
-- this file goes at /etc/csc/mibs/CSC-MIB.txt
-- and make sure to copy snmp.conf into place as well
CSC-MIB DEFINITIONS ::= BEGIN
IMPORTS
MODULE-IDENTITY, OBJECT-TYPE, Counter32, Gauge32, Counter64,
Integer32, TimeTicks, mib-2, enterprises,
NOTIFICATION-TYPE FROM SNMPv2-SMI
TEXTUAL-CONVENTION, DisplayString,
PhysAddress, TruthValue, RowStatus,
TimeStamp, AutonomousType, TestAndIncr FROM SNMPv2-TC
MODULE-COMPLIANCE, OBJECT-GROUP,
NOTIFICATION-GROUP FROM SNMPv2-CONF
snmpTraps FROM SNMPv2-MIB
IANAifType FROM IANAifType-MIB;
csclub OBJECT IDENTIFIER ::= { enterprises 27934 }
cscMIB MODULE-IDENTITY
LAST-UPDATED "200905080000Z"
ORGANIZATION "University of Waterloo Computer Science Club"
CONTACT-INFO "systems-committee@csclub.uwaterloo.ca"
DESCRIPTION "Computer Science Club Local MIBs"
REVISION "200905080000Z"
DESCRIPTION "Initial revision"
::= { csclub 2 }
mirror OBJECT IDENTIFIER ::= { cscMIB 2 }
cogentBytes OBJECT-TYPE
SYNTAX Counter64
MAX-ACCESS read-only
STATUS current
::= { mirror 1 }
orionBytes OBJECT-TYPE
SYNTAX Counter64
MAX-ACCESS read-only
STATUS current
::= { mirror 2 }
campusBytes OBJECT-TYPE
SYNTAX Counter64
MAX-ACCESS read-only
STATUS current
::= { mirror 3 }
END
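Once this file is installed under /etc/csc/mibs and snmp.conf (below) points mibdirs at it, the counters can be queried by name rather than raw OID. A small sketch; the host and community string are taken from the snmpwalk one-liner elsewhere in this commit, not from this file:

  # scalars are addressed with instance .0
  snmpget -v2c -c public mirror \
      CSC-MIB::cogentBytes.0 CSC-MIB::orionBytes.0 CSC-MIB::campusBytes.0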

11
git_old/snmp/Makefile Normal file
View File

@ -0,0 +1,11 @@
# libraries belong in LDLIBS so the implicit link rule places them after the object files
LDLIBS := -lnl $(shell net-snmp-config --base-agent-libs)
CFLAGS := -g3 -O2 -Wall
all: mirror-stats csc-snmp-subagent
mirror-stats: mirror-stats.o mirror-nl-glue.o
csc-snmp-subagent: csc-snmp-subagent.o mirror-mib.o mirror-nl-glue.o
clean:
rm -f *.o mirror-stats csc-snmp-subagent
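The Makefile leans on make's implicit compile and link rules, with net-snmp-config supplying the agent libraries, so both the net-snmp and libnl development packages must be installed first. A build sketch, assuming Debian package names of that era (libsnmp-dev and libnl-dev are assumptions, not something this commit pins down):

  apt-get install build-essential libsnmp-dev libnl-dev
  make    # produces mirror-stats and csc-snmp-subagent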

View File

@ -0,0 +1,221 @@
/* generated from net-snmp-config */
#include <net-snmp/net-snmp-config.h>
#ifdef HAVE_SIGNAL
#include <signal.h>
#endif /* HAVE_SIGNAL */
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>
#include "mirror-mib.h"
const char *app_name = "cscMIB";
extern int netsnmp_running;
#ifdef __GNUC__
#define UNUSED __attribute__((unused))
#else
#define UNUSED
#endif
RETSIGTYPE
stop_server(UNUSED int a) {
netsnmp_running = 0;
}
static void
usage(const char *prog)
{
fprintf(stderr,
"USAGE: %s [OPTIONS]\n"
"\n"
"OPTIONS:\n", prog);
fprintf(stderr,
" -d\t\t\tdump all traffic\n"
" -D TOKEN[,...]\tturn on debugging output for the specified "
"TOKENs\n"
"\t\t\t (ALL gives extremely verbose debugging output)\n"
" -f\t\t\tDo not fork() from the calling shell.\n"
" -h\t\t\tdisplay this help message\n"
" -H\t\t\tdisplay a list of configuration file directives\n"
" -L LOGOPTS\t\tToggle various defaults controlling logging:\n");
snmp_log_options_usage("\t\t\t ", stderr);
#ifndef DISABLE_MIB_LOADING
fprintf(stderr,
" -m MIB[:...]\t\tload given list of MIBs (ALL loads "
"everything)\n"
" -M DIR[:...]\t\tlook in given list of directories for MIBs\n");
#endif /* DISABLE_MIB_LOADING */
#ifndef DISABLE_MIB_LOADING
fprintf(stderr,
" -P MIBOPTS\t\tToggle various defaults controlling mib "
"parsing:\n");
snmp_mib_toggle_options_usage("\t\t\t ", stderr);
#endif /* DISABLE_MIB_LOADING */
fprintf(stderr,
" -v\t\t\tdisplay package version number\n"
" -x TRANSPORT\tconnect to master agent using TRANSPORT\n");
exit(1);
}
static void
version(void)
{
fprintf(stderr, "NET-SNMP version: %s\n", netsnmp_get_version());
exit(0);
}
int
main (int argc, char **argv)
{
int arg;
char* cp = NULL;
int dont_fork = 0, do_help = 0;
while ((arg = getopt(argc, argv, "dD:fhHL:"
#ifndef DISABLE_MIB_LOADING
"m:M:"
#endif /* DISABLE_MIB_LOADING */
"n:"
#ifndef DISABLE_MIB_LOADING
"P:"
#endif /* DISABLE_MIB_LOADING */
"vx:")) != EOF) {
switch (arg) {
case 'd':
netsnmp_ds_set_boolean(NETSNMP_DS_LIBRARY_ID,
NETSNMP_DS_LIB_DUMP_PACKET, 1);
break;
case 'D':
debug_register_tokens(optarg);
snmp_set_do_debugging(1);
break;
case 'f':
dont_fork = 1;
break;
case 'h':
usage(argv[0]);
break;
case 'H':
do_help = 1;
break;
case 'L':
if (snmp_log_options(optarg, argc, argv) < 0) {
exit(1);
}
break;
#ifndef DISABLE_MIB_LOADING
case 'm':
if (optarg != NULL) {
setenv("MIBS", optarg, 1);
} else {
usage(argv[0]);
}
break;
case 'M':
if (optarg != NULL) {
setenv("MIBDIRS", optarg, 1);
} else {
usage(argv[0]);
}
break;
#endif /* DISABLE_MIB_LOADING */
case 'n':
if (optarg != NULL) {
app_name = optarg;
netsnmp_ds_set_string(NETSNMP_DS_LIBRARY_ID,
NETSNMP_DS_LIB_APPTYPE, app_name);
} else {
usage(argv[0]);
}
break;
#ifndef DISABLE_MIB_LOADING
case 'P':
cp = snmp_mib_toggle_options(optarg);
if (cp != NULL) {
fprintf(stderr, "Unknown parser option to -P: %c.\n", *cp);
usage(argv[0]);
}
break;
#endif /* DISABLE_MIB_LOADING */
case 'v':
version();
break;
case 'x':
if (optarg != NULL) {
netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID,
NETSNMP_DS_AGENT_X_SOCKET, optarg);
} else {
usage(argv[0]);
}
break;
default:
fprintf(stderr, "invalid option: -%c\n", arg);
usage(argv[0]);
break;
}
}
if (do_help) {
netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID,
NETSNMP_DS_AGENT_NO_ROOT_ACCESS, 1);
} else {
/* we are a subagent */
netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID,
NETSNMP_DS_AGENT_ROLE, 1);
if (!dont_fork) {
if (netsnmp_daemonize(1, snmp_stderrlog_status()) != 0)
exit(1);
}
/* initialize tcpip, if necessary */
SOCK_STARTUP;
}
/* initialize the agent library */
init_agent(app_name);
/* initialize your mib code here */
init_mirror_mib();
/* cscMIB will be used to read cscMIB.conf files. */
init_snmp("cscMIB");
if (do_help) {
fprintf(stderr, "Configuration directives understood:\n");
read_config_print_usage(" ");
exit(0);
}
/* In case we received a request to stop (kill -TERM or kill -INT) */
netsnmp_running = 1;
#ifdef SIGTERM
signal(SIGTERM, stop_server);
#endif
#ifdef SIGINT
signal(SIGINT, stop_server);
#endif
/* main loop here... */
while(netsnmp_running) {
agent_check_and_process(1);
}
/* at shutdown time */
snmp_shutdown(app_name);
SOCK_CLEANUP;
exit(0);
}
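Because the code above sets NETSNMP_DS_AGENT_ROLE, the program runs as an AgentX subagent and needs a master snmpd with AgentX enabled before it can register the cscMIB handlers. A minimal sketch; the config path and restart command are stock net-snmp/Debian defaults, not anything specified by this commit:

  echo 'master agentx' >> /etc/snmp/snmpd.conf    # assumption: default snmpd config location
  service snmpd restart
  ./csc-snmp-subagent -f    # -f stays in the foreground; -x selects a non-default AgentX socket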

109
git_old/snmp/mirror-mib.c Normal file
View File

@ -0,0 +1,109 @@
/*
* Note: this file originally auto-generated by mib2c using
* : mib2c.scalar.conf 11805 2005-01-07 09:37:18Z dts12 $
*/
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>
#include "mirror-mib.h"
#include "mirror-nl-glue.h"
void
init_mirror_mib(void)
{
static oid cogentBytes_oid[] =
{ 1, 3, 6, 1, 4, 1, 27934, 2, 2, 1 };
static oid orionBytes_oid[] =
{ 1, 3, 6, 1, 4, 1, 27934, 2, 2, 2 };
static oid campusBytes_oid[] =
{ 1, 3, 6, 1, 4, 1, 27934, 2, 2, 3 };
DEBUGMSGTL(("mirror_mib", "Initializing\n"));
mirror_stats_initialize();
netsnmp_register_scalar(netsnmp_create_handler_registration
("cogentBytes", handle_cogentBytes,
cogentBytes_oid, OID_LENGTH(cogentBytes_oid),
HANDLER_CAN_RONLY));
netsnmp_register_scalar(netsnmp_create_handler_registration
("orionBytes", handle_orionBytes,
orionBytes_oid, OID_LENGTH(orionBytes_oid),
HANDLER_CAN_RONLY));
netsnmp_register_scalar(netsnmp_create_handler_registration
("campusBytes", handle_campusBytes,
campusBytes_oid, OID_LENGTH(campusBytes_oid),
HANDLER_CAN_RONLY));
}
void explode_counter64(uint64_t num, struct counter64 *counter) {
counter->low = num & 0xFFFFFFFF;
counter->high = (num >> 32) & 0xFFFFFFFF;
}
int
handle_cogentBytes(netsnmp_mib_handler *handler,
netsnmp_handler_registration *reginfo,
netsnmp_agent_request_info *reqinfo,
netsnmp_request_info *requests)
{
struct counter64 counter;
mirror_stats_refresh();
explode_counter64(get_class_byte_count(&cogent_class), &counter);
switch (reqinfo->mode) {
case MODE_GET:
snmp_set_var_typed_value(requests->requestvb, ASN_COUNTER64,
(u_char *)&counter, sizeof(counter));
break;
default:
die("unknown mode");
}
return SNMP_ERR_NOERROR;
}
int
handle_orionBytes(netsnmp_mib_handler *handler,
netsnmp_handler_registration *reginfo,
netsnmp_agent_request_info *reqinfo,
netsnmp_request_info *requests)
{
struct counter64 counter;
mirror_stats_refresh();
explode_counter64(get_class_byte_count(&orion_class), &counter);
switch (reqinfo->mode) {
case MODE_GET:
snmp_set_var_typed_value(requests->requestvb, ASN_COUNTER64,
(u_char *)&counter, sizeof(counter));
break;
default:
die("unknown mode");
}
return SNMP_ERR_NOERROR;
}
int
handle_campusBytes(netsnmp_mib_handler *handler,
netsnmp_handler_registration *reginfo,
netsnmp_agent_request_info *reqinfo,
netsnmp_request_info *requests)
{
struct counter64 counter;
mirror_stats_refresh();
explode_counter64(get_class_byte_count(&campus_class), &counter);
switch (reqinfo->mode) {
case MODE_GET:
snmp_set_var_typed_value(requests->requestvb, ASN_COUNTER64,
(u_char *)&counter, sizeof(counter));
break;
default:
die("unknown mode");
}
return SNMP_ERR_NOERROR;
}

View File

@ -0,0 +1,9 @@
#ifndef MIRRORMIB_H
#define MIRRORMIB_H
void init_mirror_mib(void);
Netsnmp_Node_Handler handle_cogentBytes;
Netsnmp_Node_Handler handle_orionBytes;
Netsnmp_Node_Handler handle_campusBytes;
#endif

View File

@ -0,0 +1,102 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <libgen.h>
#include <netlink/route/class.h>
#include <netlink/route/link.h>
#include <netlink/cache-api.h>
#include <netlink/object.h>
#include "mirror-nl-glue.h"
static struct nl_cache *link_cache, *class_cache;
static struct rtnl_link *eth;
static int ifindex;
struct class_info cogent_class = { "cogent", "01:02", };
struct class_info orion_class = { "orion", "01:03", };
struct class_info campus_class = { "campus", "01:04", };
static struct nl_handle *nl_handle;
void die(const char *message) {
fprintf(stderr, "fatal: %s\n", message);
exit(1);
}
static void match_obj(struct nl_object *obj, void *arg) {
struct nl_object *needle = *(struct nl_object **)arg;
struct nl_object **ret = (struct nl_object **)arg + 1;
if (!*ret && nl_object_identical(obj, needle)) {
nl_object_get(obj);
*ret = obj;
}
}
static struct rtnl_class *get_class_by_id(char *id, int ifindex) {
uint32_t handle;
struct rtnl_class *needle;
struct nl_object *magic[2];
if (rtnl_tc_str2handle(id, &handle))
die("invalid id");
needle = rtnl_class_alloc();
rtnl_class_set_ifindex(needle, ifindex);
rtnl_class_set_handle(needle, handle);
magic[0] = (struct nl_object *)needle;
magic[1] = (struct nl_object *)NULL;
nl_cache_foreach(class_cache, match_obj, magic);
rtnl_class_put(needle);
return (struct rtnl_class *)magic[1];
}
uint64_t get_class_byte_count(struct class_info *info) {
struct rtnl_class *class = get_class_by_id(info->id, ifindex);
uint64_t bytes;
if (!class)
die("class not found");
bytes = rtnl_class_get_stat(class, RTNL_TC_BYTES);
rtnl_class_put(class);
return bytes;
}
void mirror_stats_refresh(void) {
nl_cache_refill(nl_handle, class_cache);
}
void mirror_stats_initialize(void) {
nl_handle = nl_handle_alloc();
if (!nl_handle)
die("unable to allocate handle");
if (nl_connect(nl_handle, NETLINK_ROUTE) < 0)
die("unable to connect to netlink");
link_cache = rtnl_link_alloc_cache(nl_handle);
if (!link_cache)
die("unable to allocate link cache");
eth = rtnl_link_get_by_name(link_cache, "eth0");
if (!eth)
die("unable to acquire eth0");
ifindex = rtnl_link_get_ifindex(eth);
class_cache = rtnl_class_alloc_cache(nl_handle, ifindex);
if (!class_cache)
die("unable to allocate class cache");
}
void mirror_stats_cleanup(void) {
rtnl_link_put(eth);
nl_cache_free(class_cache);
nl_cache_free(link_cache);
nl_close(nl_handle);
nl_handle_destroy(nl_handle);
}
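get_class_by_id() resolves tc class handles 1:2, 1:3 and 1:4 on eth0, so the byte counters only exist if traffic-shaping classes with those classids have been configured on the interface. The actual qdisc setup is not part of this commit; the lines below are only a rough sketch of the kind of layout the IDs imply (HTB and the rates are pure assumptions):

  tc qdisc add dev eth0 root handle 1: htb
  tc class add dev eth0 parent 1: classid 1:2 htb rate 1000mbit    # cogent
  tc class add dev eth0 parent 1: classid 1:3 htb rate 1000mbit    # orion
  tc class add dev eth0 parent 1: classid 1:4 htb rate 1000mbit    # campus
  tc -s class show dev eth0    # the "Sent ... bytes" counters are what RTNL_TC_BYTES reports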

View File

@ -0,0 +1,25 @@
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <libgen.h>
#include <netlink/route/class.h>
#include <netlink/route/link.h>
#include <netlink/cache-api.h>
#include <netlink/object.h>
struct class_info {
char *name;
char *id;
};
extern struct class_info cogent_class;
extern struct class_info orion_class;
extern struct class_info campus_class;
void mirror_stats_refresh(void);
void mirror_stats_initialize(void);
void mirror_stats_cleanup(void);
void die(const char *);
uint64_t get_class_byte_count(struct class_info *);

View File

@ -0,0 +1,12 @@
#include "mirror-nl-glue.h"
int main(int argc, char *argv[]) {
mirror_stats_initialize();
for (;;) {
printf("%s %"PRIu64"\n", cogent_class.id, get_class_byte_count(&cogent_class));
printf("%s %"PRIu64"\n", orion_class.id, get_class_byte_count(&orion_class));
printf("%s %"PRIu64"\n", campus_class.id, get_class_byte_count(&campus_class));
sleep(1);
mirror_stats_refresh();
}
}

View File

@ -0,0 +1 @@
snmpwalk -v2c -cpublic mirror 1.3.6.1.4.1.27934.2.2

2
git_old/snmp/snmp.conf Normal file
View File

@ -0,0 +1,2 @@
mibdirs /etc/csc/mibs:/usr/share/snmp/mibs
mibs ALL
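With these two lines in place the net-snmp tools resolve the CSC-MIB names; a quick sanity check that the MIB actually loads (a sketch, nothing host-specific):

  snmptranslate -On CSC-MIB::cogentBytes    # should print .1.3.6.1.4.1.27934.2.2.1
  snmptranslate -Tp CSC-MIB::mirror         # dump the mirror subtree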

View File

@ -0,0 +1,56 @@
#!/bin/sh
. /lib/lsb/init-functions
PATH=$PATH:/bin:/usr/bin:/sbin:/usr/sbin
NAME=rtorrent
PIDFILE=/var/run/$NAME.screen
CHUSER=$NAME
DAEMON=/usr/bin/rtorrent
DAEMON_ARGS="-n -o try_import=/etc/rtorrent.rc"
do_start()
{
if [ -s $PIDFILE ] && kill -0 $(cat $PIDFILE) >/dev/null 2>&1; then
exit 0
fi
log_daemon_msg "Starting" $NAME
start-stop-daemon --start --quiet --background --pidfile $PIDFILE \
--make-pidfile --exec /bin/su -- \
$CHUSER -c "/usr/bin/screen -D -m -- $DAEMON $DAEMON_ARGS"
log_end_msg 0
}
do_stop()
{
log_daemon_msg "Stopping" $NAME
start-stop-daemon --stop --quiet --pidfile $PIDFILE --oknodo
log_end_msg 0
}
do_status()
{
if [ -s $PIDFILE ] && kill -0 $(cat $PIDFILE) >/dev/null 2>&1; then
exit 0
else
exit 4
fi
}
case "$1" in
start)
do_start
;;
stop)
do_stop
;;
restart)
do_stop
sleep 4
do_start
;;
status)
do_status
esac
exit 0

View File

@ -0,0 +1,100 @@
# This is an example resource file for rTorrent. Copy to
# ~/.rtorrent.rc and enable/modify the options as needed. Remember to
# uncomment the options you wish to enable.
# Maximum and minimum number of peers to connect to per torrent.
#min_peers = 40
#max_peers = 100
# Same as above but for seeding completed torrents (-1 = same as downloading)
#min_peers_seed = 10
#max_peers_seed = 50
# Maximum number of simultaneous uploads per torrent.
#max_uploads = 15
# Global upload and download rate in KiB. "0" for unlimited.
#download_rate = 0
#upload_rate = 0
# Default directory to save the downloaded torrents.
directory = /mirror/root/csclub
# Default session directory. Make sure you don't run multiple instances
# of rtorrent using the same session directory. Perhaps use a
# relative path?
session = /var/lib/rtorrent/session
# Watch a directory for new torrents, and stop those that have been
# deleted.
schedule = watch_www_directory,1,5,load_start=/mirror/root/csclub/*.torrent
schedule = untied_directory,5,5,remove_untied=
# Close torrents when diskspace is low.
#schedule = low_diskspace,5,60,close_low_diskspace=100M
# Stop torrents when reaching upload ratio in percent,
# when also reaching total upload in bytes, or when
# reaching final upload ratio in percent.
# example: stop at ratio 2.0 with at least 200 MB uploaded, or else ratio 20.0
#schedule = ratio,60,60,stop_on_ratio=200,200M,2000
# The ip address reported to the tracker.
#ip = 127.0.0.1
#ip = rakshasa.no
# The ip address the listening socket and outgoing connections is
# bound to.
bind = mirror
# Port range to use for listening.
port_range = 6900-6999
# Start opening ports at a random position within the port range.
#port_random = no
# Check hash for finished torrents. Might be useful until the bug is
# fixed that causes lack of disk space not to be properly reported.
#check_hash = no
# Set whether the client should try to connect to UDP trackers.
#use_udp_trackers = yes
# Alternative calls to bind and ip that should handle dynamic IPs.
#schedule = ip_tick,0,1800,ip=rakshasa
#schedule = bind_tick,0,1800,bind=rakshasa
encryption = allow_incoming,prefer_plaintext
#
# Do not modify the following parameters unless you know what you're doing.
#
# Hash read-ahead controls how many MB to request the kernel to read
# ahead. If the value is too low the disk may not be fully utilized,
# while if too high the kernel might not be able to keep the read
# pages in memory and will end up thrashing.
#hash_read_ahead = 10
# Interval between attempts to check the hash, in milliseconds.
#hash_interval = 100
# Number of attempts to check the hash while using the mincore status,
# before forcing. Overworked systems might need lower values to get a
# decent hash checking rate.
#hash_max_tries = 10
# Max number of files to keep open simultaneously.
#max_open_files = 128
# Number of sockets to simultaneously keep open.
#max_open_sockets = <no default>
# Example of scheduling commands: Switch between two ip's every 5
# seconds.
#schedule = "ip_tick1,5,10,ip=torretta"
#schedule = "ip_tick2,10,10,ip=lampedusa"
# Remove a scheduled event.
#schedule_remove = "ip_tick1"
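The watch_www_directory schedule above makes rtorrent start seeding any .torrent dropped into /mirror/root/csclub within about five seconds. This commit does not include a torrent-creation helper; one possible sketch, where mktorrent, the tracker URL and the file names are all assumptions rather than part of this repository:

  mktorrent -a http://tracker.example.org/announce \
      -o /mirror/root/csclub/some-release.torrent /mirror/root/csclub/some-release.iso
  # rtorrent loads the new .torrent on its next 5-second pass and seeds the
  # matching payload from the same directory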

599
index.html__ Normal file
View File

@ -0,0 +1,599 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="stylesheet" type="text/css" href="index.css" />
<title>Computer Science Club Mirror</title>
</head>
<body>
<div id="logo">
<a href="/"><img src="/include/header.png" alt="Computer Science Club Mirror - The University of Waterloo - Funded by MEF" title="Computer Science Club Mirror - The University of Waterloo - Funded by MEF" /></a>
</div>
A recent power outage forced a rebuild of the RAID array hosting the mirror data. As a result, the mirrored data is temporarily read-only; we hope to resume synchronizing as soon as possible.
We apologize for any inconvenience this may cause. <br />
<div id="raid-progress"></div>
<br />
<div id="listing">
<table>
<tr><th>Directory</th><th>Project Site</th><th>Size</th></tr>
<tr>
<td>
<a href="/apache/">apache/</a>
</td>
<td>
<a href="http://www.apache.org/">apache.org</a>
</td>
<td>32 GB</td>
</tr>
<tr>
<td>
<a href="/archlinux/">archlinux/</a>
</td>
<td>
<a href="http://www.archlinux.org/">archlinux.org</a>
</td>
<td>35 GB</td>
</tr>
<tr>
<td>
<a href="/blastwave/">blastwave/</a>
</td>
<td>
<a href="http://www.blastwave.org/">blastwave.org</a>
</td>
<td>14 GB</td>
</tr>
<tr>
<td>
<a href="/centos/">centos/</a>
</td>
<td>
<a href="http://www.centos.org/">centos.org</a>
</td>
<td>107 GB</td>
</tr>
<tr>
<td>
<a href="/CPAN/">CPAN/</a>
</td>
<td>
<a href="http://www.cpan.org/">cpan.org</a>
</td>
<td>7.6 GB</td>
</tr>
<tr>
<td>
<a href="/CRAN/">CRAN/</a>
</td>
<td>
<a href="http://cran.r-project.org/">r-project.org</a>
</td>
<td>54 GB</td>
</tr>
<tr>
<td>
<a href="/csclub/">csclub/</a>
</td>
<td>
<a href="http://csclub.uwaterloo.ca/media/">csclub.uwaterloo.ca</a>
</td>
<td>65 GB</td>
</tr>
<tr>
<td>
<a href="/CTAN/">CTAN/</a>
</td>
<td>
<a href="http://www.ctan.org/">ctan.org</a>
</td>
<td>19 GB</td>
</tr>
<tr>
<td>
<a href="/cygwin/">cygwin/</a>
</td>
<td>
<a href="http://www.cygwin.com/">cygwin.com</a>
</td>
<td>9.7 GB</td>
</tr>
<tr>
<td>
<a href="/damnsmalllinux/">damnsmalllinux/</a>
</td>
<td>
<a href="http://www.damnsmalllinux.org/">damnsmalllinux.org</a>
</td>
<td>18 GB</td>
</tr>
<tr>
<td>
<a href="/debian/">debian/</a>
</td>
<td>
<a href="http://www.debian.org/">debian.org</a>
</td>
<td>424 GB</td>
</tr>
<tr>
<td>
<a href="/debian-backports/">debian-backports/</a>
</td>
<td>
<a href="http://backports.debian.org/">backports.debian.org</a>
</td>
<td>34 GB</td>
</tr>
<tr>
<td>
<a href="/debian-cd/">debian-cd/</a>
</td>
<td>
<a href="http://www.debian.org/CD/">debian.org</a>
</td>
<td>77 GB</td>
</tr>
<tr>
<td>
<a href="/debian-multimedia/">debian-multimedia/</a>
</td>
<td>
<a href="http://www.debian-multimedia.org/">debian-multimedia.org</a>
</td>
<td>5.0 GB</td>
</tr>
<tr>
<td>
<a href="/debian-ports/">debian-ports/</a>
</td>
<td>
<a href="http://www.debian-ports.org/">debian-ports.org</a>
</td>
<td>63 GB</td>
</tr>
<tr>
<td>
<a href="/debian-security/">debian-security/</a>
</td>
<td>
<a href="http://www.debian.org/security/">debian.org</a>
</td>
<td>45 GB</td>
</tr>
<tr>
<td>
<a href="/debian-unofficial/">debian-unofficial/</a>
</td>
<td>
<a href="http://unofficial.debian-maintainers.org/">debian-maintainers.org</a>
</td>
<td>468 MB</td>
</tr>
<tr>
<td>
<a href="/debian-volatile/">debian-volatile/</a>
</td>
<td>
<a href="http://www.debian.org/volatile/">debian.org</a>
</td>
<td>2.5 GB</td>
</tr>
<tr>
<td>
<a href="/eclipse/">eclipse/</a>
</td>
<td>
<a href="http://www.eclipse.org/">eclipse.org</a>
</td>
<td>166 GB</td>
</tr>
<tr>
<td>
<a href="/emdebian/">emdebian/</a>
</td>
<td>
<a href="http://www.emdebian.org/">emdebian.org</a>
</td>
<td>2.8 GB</td>
</tr>
<tr>
<td>
<a href="/fedora/">fedora/</a>
</td>
<td>
<a href="http://www.fedoraproject.org/">fedoraproject.org</a>
</td>
<td>736 GB</td>
</tr>
<tr>
<td>
<a href="/FreeBSD/">FreeBSD/</a>
</td>
<td>
<a href="http://www.freebsd.org/">freebsd.org</a>
</td>
<td>1.9 TB</td>
</tr>
<tr>
<td>
<a href="/gentoo-distfiles/">gentoo-distfiles/</a>
</td>
<td>
<a href="http://www.gentoo.org/">gentoo.org</a>
</td>
<td>173 GB</td>
</tr>
<tr>
<td>
<a href="/gentoo-portage/">gentoo-portage/</a>
</td>
<td>
<a href="http://www.gentoo.org/">gentoo.org</a>
</td>
<td>612 MB</td>
</tr>
<tr>
<td>
<a href="/gnome/">gnome/</a>
</td>
<td>
<a href="http://www.gnome.org/">gnome.org</a>
</td>
<td>102 GB</td>
</tr>
<tr>
<td>
<a href="/gnu/">gnu/</a>
</td>
<td>
<a href="http://www.gnu.org/">gnu.org</a>
</td>
<td>24 GB</td>
</tr>
<tr>
<td>
<a href="/gutenberg/">gutenberg/</a>
</td>
<td>
<a href="http://www.gutenberg.org">gutenberg.org</a>
</td>
<td>511 GB</td>
</tr>
<tr>
<td>
<a href="/kde/">kde/</a>
</td>
<td>
<a href="http://www.kde.org/">kde.org</a>
</td>
<td>46 GB</td>
</tr>
<tr>
<td>
<a href="/kernel.org/">kernel.org/</a>
</td>
<td>
<a href="http://www.kernel.org/">kernel.org</a>
</td>
<td>149 GB</td>
</tr>
<tr>
<td>
<a href="/linuxmint/">linuxmint/</a>
</td>
<td>
<a href="http://www.linuxmint.com/">linuxmint.com</a>
</td>
<td>50 GB</td>
</tr>
<tr>
<td>
<a href="/linuxmint-packages/">linuxmint-packages/</a>
</td>
<td>
<a href="http://www.linuxmint.com/">linuxmint.com</a>
</td>
<td>5.3 GB</td>
</tr>
<tr>
<td>
<a href="/mozdev/">mozdev/</a>
</td>
<td>
<a href="http://www.mozdev.org/">mozdev.org</a>
</td>
<td>6.3 GB</td>
</tr>
<tr>
<td>
<a href="/mozilla.org/">mozilla.org/</a>
</td>
<td>
<a href="http://www.mozilla.org/">mozilla.org</a>
</td>
<td>165 GB</td>
</tr>
<tr>
<td>
<a href="/mysql/">mysql/</a>
</td>
<td>
<a href="http://www.mysql.com/">mysql.com</a>
</td>
<td>243 GB</td>
</tr>
<tr>
<td>
<a href="/nongnu/">nongnu/</a>
</td>
<td>
<a href="http://savannah.nongnu.org/">nongnu.org</a>
</td>
<td>18 GB</td>
</tr>
<tr>
<td>
<a href="/OpenBSD/">OpenBSD/</a>
</td>
<td>
<a href="http://www.openbsd.org/">openbsd.org</a>
</td>
<td>240 GB</td>
</tr>
<tr>
<td>
<a href="/openoffice/">openoffice/</a>
</td>
<td>
<a href="http://www.openoffice.org/">openoffice.org</a>
</td>
<td>125 GB</td>
</tr>
<tr>
<td>
<a href="/opensuse/">opensuse/</a>
</td>
<td>
<a href="http://www.opensuse.org/">opensuse.org</a>
</td>
<td>202 GB</td>
</tr>
<tr>
<td>
<a href="/racket/">racket/</a>
</td>
<td>
<a href="http://racket-lang.org/">racket-lang.org</a>
</td>
<td>11 GB</td>
</tr>
<tr>
<td>
<a href="/slackware/">slackware/</a>
</td>
<td>
<a href="http://www.slackware.com/">slackware.com</a>
</td>
<td>141 GB</td>
</tr>
<tr>
<td>
<a href="/sunfreeware/">sunfreeware/</a>
</td>
<td>
<a href="http://www.sunfreeware.com/">sunfreeware.com</a>
</td>
<td>80 GB</td>
</tr>
<tr>
<td>
<a href="/ubuntu/">ubuntu/</a>
</td>
<td>
<a href="http://www.ubuntu.com/">ubuntu.com</a>
</td>
<td>392 GB</td>
</tr>
<tr>
<td>
<a href="/ubuntu-ports/">ubuntu-ports/</a>
</td>
<td>
<a href="http://ports.ubuntu.com/ubuntu-ports/">ports.ubuntu.com</a>
</td>
<td>460 GB</td>
</tr>
<tr>
<td>
<a href="/ubuntu-ports-releases/">ubuntu-ports-releases/</a>
</td>
<td>
<a href="http://cdimage.ubuntu.com/ports/releases/">ports.ubuntu.com</a>
</td>
<td>40 GB</td>
</tr>
<tr>
<td>
<a href="/ubuntu-releases/">ubuntu-releases/</a>
</td>
<td>
<a href="http://releases.ubuntu.com/">releases.ubuntu.com</a>
</td>
<td>48 GB</td>
</tr>
<tr>
<td>
<a href="/x.org/">x.org/</a>
</td>
<td>
<a href="http://www.x.org/">x.org</a>
</td>
<td>5.7 GB</td>
</tr>
<tr>
<td>
<a href="/xubuntu-releases/">xubuntu-releases/</a>
</td>
<td>
<a href="http://www.xubuntu.org/">xubuntu.org</a>
</td>
<td>24 GB</td>
</tr>
<tr class="total">
<td>Total</td>
<td></td>
<td>6.9 TB</td>
</tr>
</table>
</div>
<div id="footer">
<p>This service is run by the <a href="http://csclub.uwaterloo.ca/">Computer Science Club of the University of Waterloo</a>.<br />It is made possible by funding from the <a href="http://www.student.math.uwaterloo.ca/~mefcom/home">Mathematics Endowment Fund</a><br />and support from the <a href="http://www.cs.uwaterloo.ca">David R. Cheriton School of Computer Science</a>.</p>
</div>
</body>
</html>

1
merlin

@ -1 +0,0 @@
Subproject commit 3b8607ff1a97e77c0dc60c8c9b85d1f593527a53

7
merlin/.gitignore vendored Normal file
View File

@ -0,0 +1,7 @@
*.pyc
/logs
/logs.*
/rebuild_logs
/stamps
merlin.sock

25
merlin/arthur.py Executable file
View File

@ -0,0 +1,25 @@
#!/usr/bin/python2
import socket, sys
try:
command = sys.argv[1]
except IndexError:
print("usage: arthur.py <status|sync:REPO>")
sys.exit(1)
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect('/home/mirror/merlin/merlin.sock')
s.send(command)
s.shutdown(socket.SHUT_WR)
response = ''
while True:
data = s.recv(4096)
if not data:
break
response = response + data
s.close()
print(response)
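arthur.py is a thin client for merlin's UNIX socket: whatever is passed as the first argument is sent to merlin (below) verbatim, and merlin understands 'status' and 'sync:<repo>'. Typical invocations on the mirror host:

  ./arthur.py status          # table of last and next expected sync times
  ./arthur.py sync:debian     # force an immediate sync of the debian repo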

55
merlin/init-script Executable file
View File

@ -0,0 +1,55 @@
#! /bin/sh
### BEGIN INIT INFO
# Provides: merlin
# Required-Start: $remote_fs $syslog $network
# Required-Stop: $remote_fs $syslog $network
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Merlin
### END INIT INFO
set -e
test -f /home/mirror/merlin/merlin.py || exit 0
. /lib/lsb/init-functions
case "$1" in
start)
log_daemon_msg "Starting Merlin" "merlin"
if start-stop-daemon --start --make-pidfile --background --quiet --oknodo --chuid mirror --pidfile /var/run/merlin.pid --chdir /home/mirror/merlin --exec /usr/bin/python -- merlin.py -d; then
log_end_msg 0
else
log_end_msg 1
fi
;;
stop)
log_daemon_msg "Stopping Merlin" "merlin"
if start-stop-daemon --stop --quiet --oknodo --user mirror --pidfile /var/run/merlin.pid; then
log_end_msg 0
else
log_end_msg 1
fi
;;
restart|force-reload)
log_daemon_msg "Restarting Merlin" "merlin"
start-stop-daemon --stop --quiet --oknodo --user mirror --retry 30 --pidfile /var/run/merlin.pid
if start-stop-daemon --start --make-pidfile --background --quiet --oknodo --chuid mirror --pidfile /var/run/merlin.pid --chdir /home/mirror/merlin --exec /usr/bin/python -- merlin.py -d; then
log_end_msg 0
else
log_end_msg 1
fi
;;
status)
status_of_proc -p /var/run/merlin.pid /usr/bin/python merlin && exit 0 || exit $?
;;
*)
log_action_msg "Usage: /etc/init.d/merlin {start|stop|force-reload|restart|status}"
exit 1
esac
exit 0

716
merlin/merlin.py Executable file
View File

@ -0,0 +1,716 @@
#!/usr/bin/python2
import time, sys, os, errno, logging, signal, copy, select, socket, grp
daily = 86400
twice_daily = 86400 / 2
hourly = 3600
bi_hourly = 7200
tri_hourly = 10800
twice_hourly = 1800
ten_minutely = 600
five_minutely = 300
maxtime = 86400
mintime = 60
jobs = {}
MAX_JOBS = 6
#earPath = '/home/mirror/merlin/merlin.sock'
earPath = '/mirror/merlin/run/merlin.sock'
cmd_buf_size = 4096
repos = {
'debian': {
'command': '~/bin/csc-sync-debian debian debian.mirror.rafal.ca debian',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
# 'debian-cdimage': {
# 'command': '~/bin/csc-sync-cdimage debian-cdimage cdimage.debian.org cdimage',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
# },
'ubuntu': {
'command': '~/bin/csc-sync-debian ubuntu archive.ubuntu.com ubuntu drescher.canonical.com',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'ubuntu-ports': {
'command': '~/bin/csc-sync-debian ubuntu-ports ports.ubuntu.com ubuntu-ports drescher.canonical.com',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'linuxmint-packages': {
'command': '~/bin/csc-sync-debian linuxmint-packages rsync-packages.linuxmint.com packages',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'debian-multimedia': {
'command': '~/bin/csc-sync-debian debian-multimedia www.deb-multimedia.org deb',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'debian-backports': {
'command': '~/bin/csc-sync-debian debian-backports debian.mirror.rafal.ca debian-backports',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
# 'debian-volatile': {
# 'command': '~/bin/csc-sync-debian debian-volatile debian.mirror.rafal.ca debian-volatile',
# 'interval': bi_hourly,
# 'max-sync-time': maxtime,
# },
'debian-security': {
'command': '~/bin/csc-sync-debian debian-security rsync.security.debian.org debian-security security-master.debian.org',
'interval': twice_hourly,
'max-sync-time': maxtime,
},
'ubuntu-releases': {
'command': '~/bin/csc-sync-standard ubuntu-releases rsync.releases.ubuntu.com releases',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'xubuntu-releases': {
'command': '~/bin/csc-sync-standard xubuntu-releases cdimage.ubuntu.com cdimage/xubuntu/releases/',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
# 'emdebian': {
# 'command': '~/bin/csc-sync-badperms emdebian www.emdebian.org debian',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
# },
'puppylinux': {
'command': '~/bin/csc-sync-standard puppylinux distro.ibiblio.org puppylinux',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'CPAN': {
'command': '~/bin/csc-sync-standard CPAN cpan-rsync.perl.org CPAN',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'CRAN': {
'command': '~/bin/csc-sync-ssh CRAN cran.r-project.org "" cran-rsync ~/.ssh/id_cran_rsa',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'CTAN': {
'command': '~/bin/csc-sync-standard CTAN rsync.dante.ctan.org CTAN',
'interval': twice_daily,
'max-sync-time': maxtime,
},
# 'openoffice': {
# 'command': '~/bin/csc-sync-standard openoffice rsync.services.openoffice.org openoffice-extended',
# 'command': '~/bin/csc-sync-standard openoffice ftp.snt.utwente.nl openoffice-extended',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
# },
'fedora-epel': {
'command': '~/bin/csc-sync-standard fedora/epel mirrors.kernel.org fedora-epel && ~/bin/report_mirror >/dev/null',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'cygwin': {
'command': '~/bin/csc-sync-standard cygwin cygwin.com cygwin-ftp',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'gnu': {
#'command': '~/bin/csc-sync-standard gnu mirrors.ibiblio.org gnuftp/gnu/',
'command': '~/bin/csc-sync-standard gnu ftp.gnu.org gnu',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'nongnu': {
# 'command': '~/bin/csc-sync-standard nongnu dl.sv.gnu.org releases --ignore-errors',
'command': '~/bin/csc-sync-standard nongnu dl.sv.gnu.org releases',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'mysql': {
#'command': '~/bin/csc-sync-standard mysql mysql.he.net mysql',
'command': '~/bin/csc-sync-standard mysql rsync.mirrorservice.org ftp.mysql.com',
'interval': twice_daily,
'max-sync-time': maxtime,
},
# No longer syncs, and no longer really relevant
# 'mozdev': {
# 'command': '~/bin/csc-sync-standard mozdev rsync.mozdev.org mozdev',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
# },
'gnome': {
'command': '~/bin/csc-sync-standard gnome master.gnome.org gnomeftp gnome',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'damnsmalllinux': {
'command': '~/bin/csc-sync-standard damnsmalllinux ftp.heanet.ie mirrors/damnsmalllinux.org/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'linuxmint': {
'command': '~/bin/csc-sync-standard linuxmint pub.linuxmint.com pub',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'kernel.org-linux': {
'command': '~/bin/csc-sync-standard kernel.org/linux rsync.kernel.org pub/linux/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'kernel.org-software': {
'command': '~/bin/csc-sync-standard kernel.org/software rsync.kernel.org pub/software/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'apache': {
'command': '~/bin/csc-sync-apache apache rsync.us.apache.org apache-dist',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'eclipse': {
'command': '~/bin/csc-sync-standard eclipse download.eclipse.org eclipseMirror',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'kde': {
'command': '~/bin/csc-sync-standard kde rsync.kde.org kdeftp',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'kde-applicationdata': {
'command': '~/bin/csc-sync-standard kde-applicationdata rsync.kde.org applicationdata',
'interval': twice_daily,
'max-sync-time': maxtime,
},
# We are a Tier 1 arch mirror (https://bugs.archlinux.org/task/52853)
# so our IP is important.
'archlinux': {
#'command': '~/bin/csc-sync-standard archlinux archlinux.mirror.rafal.ca archlinux',
'command': '~/bin/csc-sync-archlinux archlinux',
'interval': five_minutely,
'max-sync-time': maxtime,
},
'debian-ports': {
'command': '~/bin/csc-sync-standard debian-ports ftp.de.debian.org debian-ports',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'slackware': {
'command': '~/bin/csc-sync-standard slackware slackware.cs.utah.edu slackware',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'debian-cd': {
'command': '~/bin/csc-sync-debian-cd',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'x.org': {
#'command': '~/bin/csc-sync-standard x.org xorg.freedesktop.org xorg-archive',
#'command': '~/bin/csc-sync-standard x.org mirror.us.leaseweb.net xorg',
'command': '~/bin/csc-sync-standard x.org rsync.mirrorservice.org ftp.x.org/pub',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'centos': {
'command': '~/bin/csc-sync-standard centos us-msync.centos.org CentOS',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'opensuse': {
'command': '~/bin/csc-sync-standard opensuse stage.opensuse.org opensuse-full/opensuse/ #"--exclude distribution/.timestamp_invisible"',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'FreeBSD': {
# Has not updated since at least June 2018
#'command': '~/bin/csc-sync-standard FreeBSD ftp10.us.freebsd.org FreeBSD',
'command': '~/bin/csc-sync-standard FreeBSD ftp2.uk.freebsd.org ftp.freebsd.org/pub/FreeBSD/',
#'command': '~/bin/csc-sync-standard FreeBSD ftp3.us.freebsd.org FreeBSD/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'fedora-enchilada': {
# 'command': '~/bin/csc-sync-standard fedora/linux mirrors.kernel.org fedora-enchilada/linux/ --ignore-errors && ~/bin/report_mirror >/dev/null',
'command': '~/bin/csc-sync-standard fedora/linux mirrors.kernel.org fedora-enchilada/linux/ && ~/bin/report_mirror >/dev/null',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'ubuntu-ports-releases': {
'command': '~/bin/csc-sync-standard ubuntu-ports-releases cdimage.ubuntu.com cdimage/releases/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'gentoo-distfiles': {
'command': '~/bin/csc-sync-gentoo',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'gentoo-portage': {
'command': '~/bin/csc-sync-standard gentoo-portage rsync1.us.gentoo.org gentoo-portage',
'interval': twice_hourly,
'max-sync-time': maxtime,
},
# This project is no longer available for mirroring
# https://bugzilla.mozilla.org/show_bug.cgi?id=807543
#'mozilla.org': {
# 'command': '~/bin/csc-sync-standard mozilla.org releases-rsync.mozilla.org mozilla-releases',
# 'interval': twice_hourly,
# 'max-sync-time': maxtime,
#},
'gutenberg': {
'command': '~/bin/csc-sync-standard gutenberg ftp@ftp.ibiblio.org gutenberg',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'racket-installers': {
'command': '~/bin/csc-sync-wget racket/racket-installers https://mirror.racket-lang.org/installers/ 1',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'plt-bundles': {
'command': '~/bin/csc-sync-standard racket/plt-bundles mirror.racket-lang.org plt-bundles',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'OpenBSD': {
'command': '~/bin/csc-sync-standard OpenBSD ftp3.usa.openbsd.org ftp',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'xiph': {
#'command': '~/bin/csc-sync-standard xiph downloads.xiph.org xiph/releases',
'command': '~/bin/csc-sync-standard xiph ftp.osuosl.org xiph',
'interval': twice_daily,
'max-sync-time': maxtime,
},
# We currently don't have the disk space
'netbsd': {
'command': '~/bin/csc-sync-standard NetBSD rsync.netbsd.org NetBSD',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'netbsd-pkgsrc': {
'command': '~/bin/csc-sync-standard pkgsrc rsync.netbsd.org pkgsrc',
#'command': '~/bin/csc-sync-standard pkgsrc rsync3.jp.netbsd.org pub/pkgsrc/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'macports-release': {
'command': '~/bin/csc-sync-standard MacPorts/release rsync.macports.org macports/release/',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'macports-distfiles': {
'command': '~/bin/csc-sync-standard MacPorts/mpdistfiles rsync.macports.org macports/distfiles/',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
# 'raspberrypi': {
# 'command': '~/bin/csc-sync-standard raspberrypi mirrors.rit.edu rpi',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
# },
'sagemath': {
#'command': '~/bin/csc-sync-standard sage mirror.clibre.uqam.ca sage',
'command': '~/bin/csc-sync-standard sage rsync.sagemath.org sage',
'interval': twice_daily,
'max-sync-time': maxtime,
},
# 'cs136': {
# 'command': '~/bin/csc-sync-ssh uw-coursewear/cs136 linux024.student.cs.uwaterloo.ca /u/cs136/mirror.uwaterloo.ca csc01 ~/.ssh/id_rsa_csc01',
# 'interval': hourly,
# 'max-sync-time': maxtime,
# },
'vlc': {
'command': '~/bin/csc-sync-standard vlc rsync.videolan.org videolan-ftp',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'qtproject': {
'command': '~/bin/csc-sync-standard qtproject master.qt.io qt-all',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'tdf': {
'command': '~/bin/csc-sync-standard tdf rsync.documentfoundation.org tdf-pub',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'saltstack': {
'command': '~/bin/csc-sync-s3 saltstack https://s3.repo.saltproject.io',
'interval': daily,
'max-sync-time': maxtime,
},
# 'kali': {
# 'command': '~/bin/csc-sync-standard kali kali.mirror.globo.tech kali',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
# },
# 'kali-images': {
# 'command': '~/bin/csc-sync-standard kali-images kali.mirror.globo.tech kali-images',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
# },
'alpine': {
'command': '~/bin/csc-sync-standard alpine rsync.alpinelinux.org alpine',
'interval': hourly,
'max-sync-time': maxtime,
},
'raspbian': {
'command': '~/bin/csc-sync-standard raspbian archive.raspbian.org archive',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'raspberrypi': {
'command': '~/bin/csc-sync-standard-ipv6 raspberrypi apt-repo.raspberrypi.org archive',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'ipfire': {
'command': '~/bin/csc-sync-standard ipfire rsync.ipfire.org full',
'interval': hourly,
'max-sync-time': maxtime,
},
'manjaro': {
'command': '~/bin/csc-sync-standard manjaro mirrorservice.org repo.manjaro.org/repos/',
'interval': hourly,
'max-sync-time': maxtime,
},
'scientific': {
'command': '~/bin/csc-sync-standard scientific rsync.scientificlinux.org scientific',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'mxlinux': {
'command': '~/bin/csc-sync-standard mxlinux mirror.math.princeton.edu pub/mxlinux/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'mxlinux-iso': {
'command': '~/bin/csc-sync-standard mxlinux-iso mirror.math.princeton.edu pub/mxlinux-iso/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'parabola': {
'command': '~/bin/csc-sync-standard parabola repo.parabola.nu:875 repos/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
#'hyperbola-sources': {
# 'command': '~/bin/csc-sync-chmod hyperbola/sources repo.hyperbola.info:52000 repo/',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
#},
#'hyperbola-stable': {
# 'command': '~/bin/csc-sync-chmod hyperbola/gnu-plus-linux-libre/stable repo.hyperbola.info:52012 repo/',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
#},
#'hyperbola-testing': {
# 'command': '~/bin/csc-sync-chmod hyperbola/gnu-plus-linux-libre/testing repo.hyperbola.info:52011 repo/',
# 'interval': twice_daily,
# 'max-sync-time': maxtime,
#},
'trisquel-packages': {
'command': '~/bin/csc-sync-standard trisquel/packages rsync.trisquel.info trisquel.packages/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'trisquel-iso': {
'command': '~/bin/csc-sync-standard trisquel/iso rsync.trisquel.info trisquel.iso/',
'interval': twice_daily,
'max-sync-time': maxtime,
},
'almalinux': {
'command': '~/bin/csc-sync-standard almalinux rsync.repo.almalinux.org almalinux/',
'interval': bi_hourly,
'max-sync-time': maxtime,
},
'ceph': {
'command': '~/bin/csc-sync-ceph -q -s global -t ceph',
'interval': tri_hourly,
'max-sync-time': maxtime,
},
}
def mirror_status():
out = []
for x in repos:
repository = repos[x]
last_attempt = repository['last-attempt']
next_attempt = repository['last-attempt'] + repository['interval']
out.append( [x, last_attempt, next_attempt] )
out.sort(key= lambda x: x[2])
#turn floating point time values into human readable strings
for x in out:
for y in (1,2):
x[y] = time.ctime(x[y])
out.insert(0,['Repository', 'Last Synced', 'Next Expected Sync'])
#calculate maximum width of each column
widths = []
i = 0
while True:
try:
column_width = max([len(row[i]) for row in out])
widths.append(column_width + 3)
i = i + 1
except:
break
#compose table string and pad out columns
status_string = '%s%s%s\n' % (out[0][0].ljust(widths[0]), out[0][1].ljust(widths[1]), out[0][2].ljust(widths[2]))
for x in out[1:]:
for y in (0,1,2):
status_string = status_string + ('%s' % x[y].rjust(widths[y]))
status_string = status_string + '\n'
return status_string
def init_last_sync():
if not os.path.isdir('stamps'):
os.mkdir('stamps')
now = time.time()
for repo in repos:
try:
last = os.stat('stamps/%s' % repo).st_mtime
repos[repo]['last-sync'], repos[repo]['last-attempt'] = last, last
logging.info('repo %s last synced %d seconds ago' % (repo, now - last))
except OSError:
repos[repo]['last-sync'], repos[repo]['last-attempt'] = 0, 0
logging.warning('repo %s has never been synced' % repo)
def update_last_attempt(repo, start_time):
repos[repo]['last-attempt'] = start_time
def update_last_sync(repo, start_time, duration, exit_code):
repos[repo]['last-sync'] = start_time
update_last_attempt(repo, start_time)
# touch the timestamp file
with open('stamps/%s' % repo, 'w') as f:
f.write('%d,%d,%d' % (start_time, duration, exit_code))
# open('stamps/%s' % repo, 'w').close()
def handler(signum, frame):
logging.info("Caught signal %d" % signum)
os.remove(earPath)
for job in jobs:
os.kill(job, signal.SIGTERM)
time.sleep(1)
for job in jobs:
os.kill(job, signal.SIGKILL)
try:
while True:
pid, status = os.wait()
logging.info("Child process %d has terminated" % pid)
#ECHILD is thrown when there are no child processes left
except OSError, e:
if e.errno != errno.ECHILD:
raise
logging.info("All child processes terminated. Goodbye!")
sys.exit()
def setup_logging():
if not os.path.isdir('logs'):
os.mkdir('logs')
try:
os.unlink(earPath)
except OSError:
pass
if '-d' in sys.argv:
logging.basicConfig(filename="/home/mirror/merlin/logs/merlin.log",
level=logging.DEBUG, format="%(asctime)-15s %(message)s")
else:
logging.basicConfig(level=logging.DEBUG, format="%(asctime)-15s %(message)s")
def sync(current, now):
pid = os.fork()
if not pid:
try:
logfd = os.open('logs/%s' % current, os.O_WRONLY|os.O_APPEND|os.O_CREAT, 0644)
nulfd = os.open('/dev/null', os.O_RDONLY)
os.dup2(nulfd, 0)
os.dup2(logfd, 1)
os.dup2(logfd, 2)
os.close(logfd)
os.close(nulfd)
os.execv("/bin/sh", ['sh', '-c', repos[current]['command']])
except OSError, e:
print >>sys.stderr, 'failed to exec: %s' % e
os._exit(1)  # os.exit() does not exist; exit the forked child with os._exit()
#There exists a race condition that manifests if merlin is asked to terminate (by receiving a signal) after fork() but before the jobs table is updated.
#Normally you would mask off signals to avoid attempting shutdown in that time period, but this is not supported in the version of Python for which merlin was written.
#TODO:
#The Linux Programming Interface 24.5 illustrates how to synchronize parent and child using signals. Might be applicable.
jobs[pid] = {'name': current, 'start_time': now, 'status': 'running'}
def zfssync(current):
pid = os.fork()
if not pid:
try:
logfd = os.open('logs/zfssync-%s.log' % current, os.O_WRONLY|os.O_APPEND|os.O_CREAT, 0644)
nulfd = os.open('/dev/null', os.O_RDONLY)
os.dup2(nulfd, 0)
os.dup2(logfd, 1)
os.dup2(logfd, 2)
os.close(logfd)
os.close(nulfd)
os.execv("/home/mirror/bin/zfssync", ['zfssync', current])
except OSError, e:
print >>sys.stderr, 'failed to exec: %s' % e
os._exit(1)  # as above, os.exit() does not exist
def await_command(ear):
if select.select([ear],[],[],1) == ([ear],[],[]):
#handle command heard on ear socket
s, address = ear.accept()
cmdstring = ''
while True:
data = s.recv(cmd_buf_size)
if not data:
break
cmdstring = cmdstring + data
cmdstring = cmdstring.split(':',1)
command = cmdstring[0]
try:
if command == 'sync':
try:
arg = cmdstring[1]
if arg in repos:
if arg in (x['name'] for x in jobs.itervalues()):
logging.info('Cannot force sync: %s. Already syncing.' % arg)
s.send('Cannot force sync %s, already syncing.' % command)
else:
logging.info('Forcing sync: %s' % arg)
s.send('Forcing sync: %s' % arg)
sync(arg, time.time())
else:
logging.info('%s not tracked, cannot sync.' % arg)
s.send('%s not tracked, cannot sync.' % arg)
except:
s.send('Could not parse sync command, forced sync fails.')
raise
elif command == 'status':
s.send(mirror_status())
else:
logging.error('Received unrecognized command: %s' % command)
s.send('Bad command: %s' % command)
s.close()
except socket.error, e:
logging.error('Could not communicate with arthur over socket.')
def new_jobs(now):
for current in repos:
if len(jobs) >= MAX_JOBS:
break
if now <= repos[current]['last-attempt'] + mintime:
continue
if current in (x['name'] for x in jobs.itervalues()):
continue
when_due = repos[current]['last-sync'] + repos[current]['interval']
if now >= when_due:
logging.debug("syncing %s, due for %d seconds" % (current, now - when_due))
sync(current, now)
def handle_completed(now):
try:
pid, status = os.waitpid(-1, os.WNOHANG) #waitpid() likes to throw an exception when there's nothing to wait for
#Don't know if we should be forever ignoring hung jobs if they eventually return...
if pid != 0 and pid in jobs:
job = jobs.pop(pid)
if os.WIFSIGNALED(status):
logging.error("%s %s sync terminated with signal %d" % (job['status'], job['name'], os.WTERMSIG(status)))
update_last_attempt(job['name'], job['start_time'])
elif os.WIFEXITED(status):
exit_status = os.WEXITSTATUS(status)
logging.error("%s %s sync exited with status %d " % (job['status'], job['name'], exit_status))
update_last_attempt(job['name'], job['start_time'])
interval = repos[job['name']]['interval']
sync_took = int(now - job['start_time'])
next_sync = max(0, interval - sync_took)
update_last_sync(job['name'], job['start_time'], sync_took, exit_status)
logging.info('%s sync complete, took %d seconds, syncs again in %d seconds'
% (job['name'], sync_took, next_sync))
zfssync(job['name'])
except OSError, e:
if e.errno != errno.ECHILD:
raise
def check_hung(now):
for pid in jobs:
#check if hung
runtime = now - jobs[pid]['start_time']
repo = jobs[pid]['name']
if jobs[pid]['status'] == 'running' and runtime > repos[repo]['max-sync-time']:
jobs[pid]['status'] = 'hung'
logging.error("%s sync process hung, process pid %d" % (repo, pid))
def main():
if not os.geteuid():
print "Don't run merlin as root!"
sys.exit(1)
setup_logging()
logging.info("Starting Merlin")
signal.signal(signal.SIGTERM, handler)
signal.signal(signal.SIGINT, handler)
init_last_sync()
old_umask = os.umask(0o002)
ear = socket.socket(socket.AF_UNIX)
ear.bind(earPath)
ear.listen(1)
os.umask(old_umask)
os.chown(earPath, -1, grp.getgrnam("push").gr_gid)
while True:
await_command(ear)
now = time.time()
new_jobs(now)
handle_completed(now)
check_hung(now)
if __name__ == '__main__':
main()

13
merlin/merlin.service Normal file
View File

@ -0,0 +1,13 @@
[Unit]
Description=Manages synchronization of mirrored projects
After=network.target
[Service]
ExecStart=/home/mirror/merlin/merlin.py
WorkingDirectory=/home/mirror/merlin
User=mirror
Group=mirror
SyslogIdentifier=merlin
[Install]
WantedBy=multi-user.target
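This unit runs merlin.py directly as the mirror user and replaces the SysV init-script above. A typical installation sketch; the destination path is the usual systemd drop-in location, an assumption rather than something documented in this commit:

  cp merlin/merlin.service /etc/systemd/system/
  systemctl daemon-reload
  systemctl enable --now merlin
  journalctl -t merlin -f    # follows the SyslogIdentifier configured above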

10
merlin/rebuild.sh Normal file
View File

@ -0,0 +1,10 @@
#!/bin/bash
IFS=$'\n'
mkdir -p rebuild_logs  # the log directory is gitignored, so create it before use
for i in `cat merlin.py | sed -ne "s/^[ ]\+'command':[ ]*'\([^']\+\).*$/\1/pg"`; do
while [ "`jobs|wc -l`" -ge 9 ]; do
sleep 1
done
echo "$i"
logfile="`echo "$i" | cut -d' ' -f2 | sed -e's/\//_/g'`"
bash -c "$i" >"rebuild_logs/$logfile" 2>&1 &
done

47
merlin/test.py Executable file
View File

@ -0,0 +1,47 @@
#!/usr/bin/python
import merlin
merlin.repos = {
'gnu': {
'command': 'sleep 7',
'interval': 30,
'max-sync-time': 60,
},
'nongnu': {
'command': 'sleep 10',
'interval': 30,
'max-sync-time': 60,
},
'mysql': {
'command': 'sleep 30',
'interval': 60,
'max-sync-time': 60,
},
'mozdev': {
'command': 'sleep 5',
'interval': 5,
'max-sync-time': 60,
},
'gnome': {
'command': 'sleep 42',
'interval': 10,
'max-sync-time': 60,
},
'damnsmalllinux': {
'command': 'sleep 3; exit 1',
'interval': 15,
'max-sync-time': 60,
},
'linuxmint': {
'command': 'sleep 6',
'interval': 20,
'max-sync-time': 4,
},
}
merlin.mintime = 10
merlin.earPath= 'merlin.sock'
if __name__ == '__main__':
merlin.main()

1
merlin/test/debianBased Normal file
View File

@ -0,0 +1 @@
("debian", "debian-backports", "debian-cd", "debian-multimedia", "debian-ports", "debian-security", "debian-unofficial", "debian-volatile", "ubuntu", "ubuntu-ports", "ubuntu-ports-releases", "ubuntu-releases", "xubuntu-releases")

5
merlin/test/debian_update Executable file
View File

@ -0,0 +1,5 @@
#!/bin/bash
#Dummy debian update script for testing
echo "Updating debian-type mirror: $1"
sleep 5

View File

@ -0,0 +1 @@
Wed Dec 30 03:17:50 UTC 2009

View File

@ -0,0 +1 @@
Sun Dec 27 03:17:50 UTC 2009

View File

@ -0,0 +1 @@
Sun Dec 27 03:17:50 UTC 2009

1
merlin/test/mirrors Normal file
View File

@ -0,0 +1 @@
{'debian': 'foo'}

5
merlin/test/sync Executable file
View File

@ -0,0 +1,5 @@
#!/bin/bash
sleep 2
echo "done syncing $1"

View File

@ -0,0 +1,30 @@
repos = {
'gnu': {
'command': 'sleep 10',
'interval': daily,
},
'nongnu': {
'command': 'sleep 10',
'interval': daily,
},
'mysql': {
'command': 'sleep 10',
'interval': daily,
},
'mozdev': {
'command': 'sleep 10',
'interval': daily,
},
'gnome': {
'command': 'sleep 10',
'interval': daily,
},
'damnsmalllinux': {
'command': 'sleep 10',
'interval': daily,
},
'linuxmint': {
'command': 'sleep 10',
'interval': daily,
},
}

View File

@ -0,0 +1 @@
Tue Dec 29 02:47:07 UTC 2009

553
merlin/zfssync.yml Normal file
View File

@ -0,0 +1,553 @@
projects: {}
# CPAN:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: CPAN
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: CPAN
# CRAN:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: CRAN
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: CRAN
# CTAN:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: CTAN
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: CTAN
## FreeBSD:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: FreeBSD
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: FreeBSD
# macports-distfiles:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: MacPorts
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: MacPorts
# macports-release:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: MacPorts
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: MacPorts
## netbsd-pkgsrc:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: pkgsrc
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: pkgsrc
## netbsd:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: NetBSD
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: NetBSD
# OpenBSD:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: OpenBSD
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: OpenBSD
# alpine:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: alpine
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: alpine
## apache:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: apache
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: apache
## archlinux:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: archlinux
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: archlinux
# centos:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: centos
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: centos
## csclub:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: csclub
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: csclub
# cygwin:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: cygwin
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: cygwin
## damnsmalllinux:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: damnsmalllinux
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: damnsmalllinux
# debian:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: debian
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: debian
# debian-backports:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: debian-backports
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: debian-backports
# debian-cd:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: debian-cd
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: debian-cd
## debian-multimedia:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: debian-multimedia
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: debian-multimedia
## debian-ports:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: debian-ports
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: debian-ports
# debian-security:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: debian-security
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: debian-security
## debian-volatile:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: debian-volatile
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: debian-volatile
# eclipse:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: eclipse
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: eclipse
# fedora-epel:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: fedora
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: fedora
# fedora-enchilada:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: fedora
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: fedora
## gentoo-distfiles:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: gentoo-distfiles
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: gentoo-distfiles
## gentoo-portage:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: gentoo-portage
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: gentoo-portage
## gnome:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: gnome
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: gnome
# gnu:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: gnu
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: gnu
# gutenberg:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: gutenberg
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: gutenberg
## ipfire:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: ipfire
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: ipfire
## kali:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: kali
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: kali
## kali-images:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: kali-images
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: kali-images
## kde:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: kde
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: kde
## kernel.org-linux:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: kernel.org
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: kernel.org
## kernel.org-software:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: kernel.org
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: kernel.org
# linuxmint:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: linuxmint
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: linuxmint
# linuxmint-packages:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: linuxmint-packages
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: linuxmint-packages
# manjaro:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: manjaro
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: manjaro
## mysql:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: mysql
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: mysql
## nongnu:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: nongnu
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: nongnu
## opensuse:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: opensuse
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: opensuse
# pkgsrc:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: pkgsrc
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: pkgsrc
## puppylinux:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: puppylinux
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: puppylinux
## qtproject:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: qtproject
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: qtproject
# plt-bundles:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: racket
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: racket
# racket-installers:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: racket
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: racket
# raspbian:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: raspbian
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: raspbian
# raspberrypi:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: raspberrypi
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: raspberrypi
## sagemath:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: sage
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: sage
# saltstack:
# hosts:
# potassium-benzoate:
# pool: cscmirror1
# dataset: saltstack
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: saltstack
## slackware:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: slackware
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: slackware
## tdf:
## hosts:
## potassium-benzoate:
## pool: cscmirror1
## dataset: tdf
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: tdf
# ubuntu:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: ubuntu
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: ubuntu
## ubuntu-ports:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: ubuntu-ports
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: ubuntu-ports
## ubuntu-ports-releases:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: ubuntu-ports-releases
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: ubuntu-ports-releases
# ubuntu-releases:
# hosts:
# potassium-benzoate:
# pool: cscmirror2
# dataset: ubuntu-releases
# phys-1002-201.cloud.cs.uwaterloo.ca:
# pool: cscmirror1
# dataset: ubuntu-releases
## vlc:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: vlc
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: vlc
## wics:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: wics
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: wics
## x.org:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: x.org
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: x.org
## xiph:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: xiph
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: xiph
## xubuntu-releases:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: xubuntu-releases
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: xubuntu-releases
## scientific:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: scientific
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: scientific
## mxlinux:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: mxlinux
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: mxlinux
## mxlinux-iso:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: mxlinux-iso
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: mxlinux-iso
## parabola:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: parabola
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: parabola
## hyperbola:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: hyperbola
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: hyperbola
## trisquel:
## hosts:
## potassium-benzoate:
## pool: cscmirror2
## dataset: trisquel
## phys-1002-201.cloud.cs.uwaterloo.ca:
## pool: cscmirror1
## dataset: trisquel
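#
# (editorial note, not part of the original file) every entry above follows the
# same shape: a mirrored project, the hosts that carry it, and for each host the
# pool and dataset it lives on (presumably ZFS, given the cscmirror1/cscmirror2
# pool names). The alpine entry near the top, for example, would map to datasets
# along the lines of:
#
#   cscmirror2/alpine   on potassium-benzoate
#   cscmirror1/alpine   on phys-1002-201.cloud.cs.uwaterloo.ca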

View File

@ -0,0 +1,76 @@
[global]
# if enabled=0, no data is sent to the database
enabled=1
# server= is the URL to the MirrorManager XML-RPC interface
server=https://admin.fedoraproject.org/mirrormanager/xmlrpc

[site]
# if enabled=0, no data about this site is sent to the database
enabled=1
# Name and Password fields need to match the Site name and password
# fields you entered for your Site in the MirrorManager database at
# https://admin.fedoraproject.org/mirrormanager
name=Computer Science Club of the University of Waterloo
password=f3d0ra3743

[host]
# if enabled=0, no data about this host is sent to the database
enabled=1
# Name field need to match the Host name field you entered for your
# Host in the MirrorManager database at
# https://admin.fedoraproject.org/mirrormanager
name=mirror.csclub.uwaterloo.ca
# if user_active=0, no data about this category is given to the public
# This can be used to toggle between serving and not serving data,
# such enabled during the nighttime (when you have more idle bandwidth
# available) and disabled during the daytime.
# By not specifying user_active, the database will not be updated.
# user_active=1

[stats]
# Stats are only sent when run with the -s option
# and when this section is enabled.
# This feature is not presently implemented
enabled=0
apache=/var/log/httpd/access_log
vsftpd=/var/log/vsftpd.log
# remember to enable log file and transfer logging in rsyncd.conf
rsyncd=/var/log/rsyncd.log
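# (editorial sketch, not part of the upstream sample) the matching rsyncd.conf
# settings would look something like:
#   log file = /var/log/rsyncd.log
#   transfer logging = yes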

# Content Categories
# These sections match the Categories for content tracked by MirrorManager.
#
# enabled=1 means information about this category will be sent to the database.
# enabled=0, no data about this host is sent to the database. If the
# database already has information for you for this Category, it will
# remain unchanged. This can be used to update the database after you
# have manually synced some infrequently-updated content, such as
# historical releases.
#
# path= is the path on your local disk to the top-level directory for this Category

[Fedora Linux]
enabled=1
path=/mirror/root/fedora/linux

[Fedora EPEL]
enabled=1
path=/mirror/root/fedora/epel

# lesser used categories below
[Fedora Secondary Arches]
enabled=0
path=/var/www/html/pub/fedora-secondary

[Fedora Other]
enabled=0
path=/var/www/html/pub/alt

# historical content
[Fedora Archive]
enabled=0
path=/var/www/html/pub/fedora-archive
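#
# (editorial note, not part of the original file) this is the config read by the
# report_mirror client; the old crontab below shows it being run as
# "~/bin/report_mirror >/dev/null" right after the Fedora/EPEL syncs, and per the
# [stats] comments above it would additionally need -s before any usage stats
# are pushed.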

69
old-crontab Normal file
View File

@ -0,0 +1,69 @@
# m h dom mon dow command
# make torrents
*/10 * * * * /home/mirror/bin/make-torrents > /dev/null 2> /dev/null
# These rsync cron jobs are now run by a small script that works a bit more
# intelligently than cron. For one thing, it won't kick off a sync when one's
# already running. Please see ~mirror/merlin.
# -- mspang
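#
# (editorial sketch, not part of the original crontab) the same "never start a
# sync while the previous one is still running" behaviour can be had from plain
# cron by wrapping each job in flock(1), for example:
#
#   5 */2 * * * flock -n /tmp/csc-sync-debian.lock ~/bin/csc-sync-debian debian debian.mirror.rafal.ca debian ftp-master.debian.org
#
# (the lock file path here is illustrative; -n makes flock give up immediately
# instead of queueing when the lock is already held)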
#
# bi-hourly
#
# 5 */2 * * * ~/bin/csc-sync-debian debian debian.mirror.rafal.ca debian ftp-master.debian.org
# 35 */2 * * * ~/bin/csc-sync-debian ubuntu archive.ubuntu.com ubuntu drescher.canonical.com
# 15 */2 * * * ~/bin/csc-sync-debian ubuntu-ports ports.ubuntu.com ubuntu-ports drescher.canonical.com
# 45 */2 * * * ~/bin/csc-sync-debian linuxmint-packages packages.linuxmint.com packages
#
# 5 */2 * * * ~/bin/csc-sync-debian debian-multimedia www.debian-multimedia.org debian marillat.net
# 10 */2 * * * ~/bin/csc-sync-debian debian-backports www.backports.org backports.org www.backports.org
# 15 */2 * * * ~/bin/csc-sync-debian debian-volatile volatile-master.debian.org debian-volatile volatile-master.debian.org
# 20 */2 * * * ~/bin/csc-sync-debian debian-security security.debian.org debian-security security-master.debian.org
# 25 */2 * * * ~/bin/csc-sync-debian debian-unofficial debian-maintainers.org unofficial
# 30 */2 * * * ~/bin/csc-sync-standard ubuntu-releases rsync.releases.ubuntu.com releases
# 35 */2 * * * ~/bin/csc-sync-standard xubuntu-releases cdimage.ubuntu.com cdimage/xubuntu/releases/
#
##
## daily
##
# 5 3,15 * * * ~/bin/csc-sync-debian emdebian www.emdebian.org debian
# 5 3,15 * * * ~/bin/csc-sync-standard CPAN rsync.nic.funet.fi CPAN
# 5 3,15 * * * ~/bin/csc-sync-standard CRAN cran.r-project.org CRAN
# 5 3,15 * * * ~/bin/csc-sync-standard CTAN carroll.aset.psu.edu ctan
# 5 3,15 * * * ~/bin/csc-sync-standard openoffice rsync.services.openoffice.org openoffice-extended
## 5 3,15 * * * ~/bin/csc-sync-standard fedora/epel fedora-archives.ibiblio.org fedora-epel && ~/bin/report_mirror >/dev/null
# 5 4,16 * * * ~/bin/csc-sync-standard cygwin cygwin.com cygwin-ftp
#
## merlinized - do not touch - mspang
## 5 4,16 * * * ~/bin/csc-sync-standard gnu ftp.ibiblio.org pub/gnu/ftp/gnu/
## 5 4,16 * * * ~/bin/csc-sync-standard nongnu dl.sv.gnu.org releases --ignore-errors
## 5 5,17 * * * ~/bin/csc-sync-standard mysql mysql.he.net mysql
## 5 5,17 * * * ~/bin/csc-sync-standard mozdev rsync.mozdev.org mozdev
## 5 6,18 * * * ~/bin/csc-sync-standard gnome ftp.gnome.org gnome
## 5 6,18 * * * ~/bin/csc-sync-standard damnsmalllinux ftp.heanet.ie mirrors/damnsmalllinux.org/
## 5 7,19 * * * ~/bin/csc-sync-standard linuxmint ftp.heanet.ie pub/linuxmint.com/
#
# 5 4,16 * * * ~/bin/csc-sync-standard kernel.org/linux kernel.org all/linux/
# 5 4,16 * * * ~/bin/csc-sync-standard kernel.org/software kernel.org all/software/
# 5 4,16 * * * ~/bin/csc-sync-standard apache rsync.us.apache.org apache-dist
# 5 4,16 * * * ~/bin/csc-sync-standard eclipse download.eclipse.org eclipseMirror
# 5 5,17 * * * ~/bin/csc-sync-standard kde master.kde.org kdeftp
# 5 5,17 * * * ~/bin/csc-sync-standard blastwave master.rsync.blastwave.org blastwave
# 5 5,17 * * * ~/bin/csc-sync-standard archlinux mirrors.kernel.org archlinux
# 5 5,17 * * * ~/bin/csc-sync-standard debian-ports ftp.debian-ports.org debian --ignore-errors
# 5 5,17 * * * ~/bin/csc-sync-standard slackware slackware.cs.utah.edu slackware
# 5 5,17 * * * ~/bin/csc-sync-debian-cd
# 5 6,18 * * * ~/bin/csc-sync-standard x.org xorg.freedesktop.org xorg-archive
# 5 6,18 * * * ~/bin/csc-sync-standard centos us-msync.centos.org CentOS
# 5 6,18 * * * ~/bin/csc-sync-standard opensuse stage.opensuse.org opensuse-full/opensuse/ #"--exclude distribution/.timestamp_invisible"
# 5 7,19 * * * ~/bin/csc-sync-standard FreeBSD ftp1.ca.freebsd.org freebsd
## 5 7,19 * * * ~/bin/csc-sync-standard fedora/linux fedora-archives.ibiblio.org fedora-enchilada/linux/ --ignore-errors && ~/bin/report_mirror >/dev/null
# 5 7,19 * * * ~/bin/csc-sync-standard ubuntu-ports-releases cdimage.ubuntu.com cdimage/ports/releases/
#
##
## other
##
# 29 */4 * * * RSYNC_USER=gentoo RSYNC_PASSWORD=vidgeryd ~/bin/csc-sync-standard gentoo-distfiles masterdistfiles.gentoo.org gentoo
# 15,45 * * * * ~/bin/csc-sync-standard gentoo-portage rsync1.us.gentoo.org gentoo-portage
# 5,35 * * * * ~/bin/csc-sync-standard mozilla.org releases-rsync.mozilla.org mozilla-releases