Vinod Kurup

Hospitalist/programmer in search of the meaning of life

Mar 8, 2004 - 7 minute read - Comments - darwinports mac-os-x openacs

darwinports

In the process of installing Panther from scratch, I decided to use DarwinPorts. It’s a packaging system for the abundant UNIX free software available on Mac OS X. I had previously used Fink which, while an admirable project, seemed a little opaque to me: I never quite knew where data was being installed or which locations were being searched for prerequisites. DarwinPorts seems to be a little better at finding programs and libraries that are already installed. I freely admit my ignorance in this matter, though. It may simply be that I understand DarwinPorts better than I did Fink, but that, in itself, is an important criterion.

Here are the ports that I currently have installed:

[02:17:13 vinod]:~ $ port installed
The following ports are installed:
     gettext-0.10.40
     keychain-2.0.3
     mutt-1.4.2.1
     offlineimap-4.0.0
     readline-4.3
     urlview-0.9
     wget-1.9.1

All of the above ports are available from darwinports except offlineimap. I built that one myself. Here is the Portfile for offlineimap:

[02:18:42 vinod]:~/dports-dev/mail/offlineimap $ cat Portfile 
# $Id: $
PortSystem          1.0
name                offlineimap 
version             4.0.0 
categories          mail 
maintainers         vinod@kurup.org 
platforms           darwin 
homepage            http://gopher.quux.org:70/devel/offlineimap 
description         IMAP/Maildir synchronization and reader support 
long_description    OfflineIMAP is a tool to simplify your e-mail reading. With OfflineIMAP, you can:   
                    * Read the same mailbox from multiple computers, and have your     
                      changes (deletions, etc.) be automatically reflected on     
                      all computers   
                    * Use various mail clients to read a single mail box   
                    * Read mail while offline (on a laptop) and have all changes     
                      synchronized when you get connected again   
                    * Read IMAP mail with mail readers that do not support IMAP   
                    * Use SSL (secure connections) to read IMAP mail even if your reader     
                      doesn't support SSL   
                    * Synchronize your mail using a completely safe and fault-tolerant     
                      algorithm.  (At least I think it is!)   
                    * Customize which mailboxes to synchronize with regular expressions     
                      or lists.   
                    * Synchronize your mail two to four times faster than with other tools     
                      or other mail readers' internal IMAP support.  In short, OfflineIMAP 
                      is a tool to let you read mail how YOU want to.   
distname            ${name}_${version} 
master_sites        http://gopher.quux.org:70/devel/offlineimap/ 
checksums           md5 13e355c8a957dddfe9b7486821d83370  
depends_lib         bin:python2.3:python23  
# tarball extracts as offlineimap, not offlineimap-4.0.0 
worksrcdir          ${name}  
use_configure       no  
build.cmd           python setup.py 
build.target        build  
destroot.cmd        python setup.py 
destroot.destdir    --prefix=${destroot}${prefix}  
post-destroot {
    # remind user to add the installed path to their python path
    ui_msg "be sure the install path is included in your python path:"
    # FIXME: hardcoding 2.3 isn't right here
    ui_msg "tcsh: setenv PYTHONPATH \$PYTHONPATH:${prefix}/lib/python2.3/site-packages"
    ui_msg "bash: export PYTHONPATH=\$PYTHONPATH:${prefix}/lib/python2.3/site-packages"
    xinstall -d -m 755 ${destroot}${prefix}/share/doc/${name}
    xinstall -m 644 -W ${worksrcpath} manual.html manual.pdf manual.ps \
        manual.txt ChangeLog COPYING COPYRIGHT UPGRADING \
        ${destroot}${prefix}/share/doc/${name}
    xinstall -d -m 755 ${destroot}${prefix}/share/doc/${name}/examples
    xinstall -m 644 -W ${worksrcpath} offlineimap.conf offlineimap.conf.minimal \
        ${destroot}${prefix}/share/doc/${name}/examples
    xinstall -m 644 -W ${worksrcpath} offlineimap.1 \
        ${destroot}${prefix}/share/man/man1
}
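With the Portfile sitting in a local ports tree, one way to exercise it without actually installing anything is to stage it into the destroot (a sketch — it assumes the tree is already registered in sources.conf and that you have darwinports installed):

```shell
# Sketch: build and stage the local offlineimap port without installing it.
# Assumes ~/dports-dev is listed in /opt/local/etc/ports/sources.conf.
cd ~/dports-dev
portindex                   # regenerate the local PortIndex
cd mail/offlineimap
sudo port -v -d destroot    # fetch, build, and stage into the destroot
```

If the destroot phase succeeds, `sudo port install` from the same directory does the real install.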

I haven’t submitted this to the darwinports project yet because they recommend installing the software on a fresh Mac OS X install to be sure that you’ve correctly specified any dependencies. I haven’t had time to do that yet. I hope to, though, because I find the port system very intuitive. Building an OpenACS port on Mac OS X seems doable, but I need to look into how upgrades are handled. Walter McGinnis has sent me his instructions for building an OpenACS installation using darwinports, but I think we can make it easier by building simple Portfiles for each of the required pieces. Here are Walter’s detailed instructions:

# install darwinports 
# see instructions on darwinports.com
# using openacs-head for server name for convoluted reasons...
# i also use ~/Development/web rather than /var/lib/aolserver for my 
# server source 
# darwinports working directories under ~/Development/darwinports/ 
# you need to add /opt/local/bin or something to your path in .bash_profile 
# for the port command to work 
# see darwinports install doc for details
Delphy:~ walter$ sudo port -v -d install postgresql
Password: ...
Delphy:~ walter$ pwd
/Users/walter
Delphy:~ walter$ cd /opt/local/
Delphy:/opt/local walter$ mkdir pgsql
Delphy:/opt/local walter$ mkdir pgsql/data
Delphy:/opt/local walter$ sudo chown -R walter:staff pgsql
Delphy:/opt/local walter$ initdb -D pgsql/data
...
Delphy:/opt/local walter$ pg_ctl -D pgsql/data -l logfile start
postmaster successfully started
Delphy:/opt/local walter$ createlang plpgsql template1
Delphy:/opt/local walter$ createdb openacs-head
CREATE DATABASE
Delphy:/opt/local walter$ createuser openacs-head
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) y
CREATE USER
Delphy:/opt/local walter$ sudo port -v -d install aolserver
... 
Delphy:/opt/local walter$ cd ~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ sudo port -v -d install wget
Password: ... 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ sudo wget http://twtelecom.dl.sourceforge.net/sourceforge/aolserver/nscache-1.5.tar.gz
Password: ... 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ sudo wget http://twtelecom.dl.sourceforge.net/sourceforge/aolserver/nspostgres-4.0.tar.gz
...
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ sudo wget http://twtelecom.dl.sourceforge.net/sourceforge/aolserver/nssha1-0.1.tar.gz
... 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ sudo cvs -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/aolserver login
Password:  (Logging in to anonymous@cvs.sourceforge.net)
CVS password: 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ sudo cvs -z3 -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/aolserver co nsrewrite
... 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ sudo tar xvfz nscache-1.5.tar.gz ; sudo tar xvfz nssha1-0.1.tar.gz ; sudo tar xvfz nspostgres-4.0.tar.gz
... 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ cd nscache-1.5 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/nscache-1.5 walter$ NSHOME=..; export NSHOME 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/nscache-1.5 walter$ sudo make install
... 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/nscache-1.5 walter$ cd ../nsrewrite/ 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/nsrewrite walter$ sudo make install
... 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/nsrewrite walter$ cd .. 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0 walter$ cd nssha1-0.1 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/nssha1-0.1 walter$ sudo make install 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/nssha1-0.1 walter$ cd ../nspostgres-4.0 
# edit the Makefile from:
# < MODLIBS   = -L$(PGLIB) -lpq
# to
# MODLIBS   = -L$(PGLIB) -lpq -lnsdb 
Delphy:~/Development/darwinports/dports/www/aolserver/work/aolserver-4.0/nspostgres-4.0 walter$ sudo make install POSTGRES=/opt/local ACS=1
... 
Delphy:/opt/local walter$ cd /opt/local 
Delphy:/opt/local walter$ sudo mkdir src
Delphy:/opt/local walter$ sudo chmod 775 src  
Delphy:/opt/local walter$ wget http://www.tdom.org/tDOM-0.7.8.tar.gz
...
Delphy:/opt/local walter$ cd src/
Delphy:/opt/local/src walter$ tar xvfz ../tDOM-0.7.8.tar.gz 
Delphy:/opt/local/src walter$ rm ../tDOM-0.7.8.tar.gz
Delphy:/opt/local/src walter$ cd tDOM-0.7.8/ 
Delphy:/opt/local/src/tDOM-0.7.8 walter$ cd unix/
Delphy:/opt/local/src/tDOM-0.7.8/unix walter$ ../configure --enable-threads --disable-tdomalloc --prefix=/opt/local/aolserver 
Delphy:/opt/local/src/tDOM-0.7.8/unix walter$ sudo make install
... 
Delphy:/opt/local/src/tDOM-0.7.8/unix walter$ cd
Delphy:~ walter$ cd Development/
Delphy:~/Development walter$ mkdir web
Delphy:~/Development walter$ cd web 
Delphy:~/Development/web walter$ cvs -z3 -d walterg@openacs.org:/cvsroot co acs-core
password: ... 
# i move the openacs code to openacs-head
# mv openacs-4 openacs-head
# edit openacs-head/etc/config.tcl to be localhost, etc
! set hostname                  localhost
! set address                   127.0.0.1 
! set server                    "openacs-head"  
set servername                "New OpenACS Installation - Development" 
! set serverroot                "/Users/walter/Development/web/${server}" 
Delphy:~/Development/web/openacs-head walter$ /opt/local/aolserver/bin/nsd -ft ~/Development/web/openacs-head/etc/config.tcl

I think Fink and Darwinports can coexist since Fink is installed in /sw and Darwinports in /opt/local, but I’m sticking with Darwinports alone, for now.

Comments from old site

darwinports distribution

I agree that a lot of the build process for an openacs install could be handled with things like dependencies on other ports using Darwinports.

It would be pretty slick to be able to do something like the following:

sudo port install openacs -type personal -name myopenacs.com -ip 123.456.78.9 -admin_email admin@myopenacs.com

I'm not sure that the above passing of install parameters can happen yet with the port command. A scenario like this may be necessary:

sudo port install openacs-deps-and-src
sudo /opt/local/openacs/configure -type personal -name myopenacs.com -ip 123.456.78.9 -admin_email admin@myopenacs.com
sudo openacs start # really a wrapper that checks whether the db is running, starts it if not, and starts the server's nsd process

After getting to one or the other scenario, it's a simple step to creating an AppleScript UI for configuring the parameters and calling the command.

It is also important to note that Darwinports, at least in theory, could be used on other Unix variants like Linux and FreeBSD. The big trouble I see is that it isn't compliant with the standards that the OpenACS docs use. However, I must admit that I'm kind of partial to the idea of having a separate /opt branch for stuff.

Walter McGinnis 2004-03-20 14:28:36

Update

I'm in the process of installing OpenACS for development on a different machine than previously. I've updated to the latest versions of software. As a result, if you follow my previous directions, you'll need to add a new flag to the "sudo port -v -d install" commands in order to keep the source files around (you need these to build other software later): -k, for "keep the working directories".
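Concretely, that means the install commands earlier become (a sketch):

```shell
# -k keeps the working directory (the extracted source) after install,
# so the later module builds can find the aolserver sources
sudo port -v -d -k install aolserver
```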

Walter McGinnis 2004-12-03 17:10:46

Mar 8, 2004 - 2 minute read - Comments - mac-os-x installation

Don't use "Archive and Install"

Thanks to the fine folks at badgertronics, I was able to upgrade my Powerbook to Panther free of charge. I initially did an “Archive and Install” upgrade, as recommended by the upgrade guide at TidBITS (easily worth the measly $5, IMHO). But soon after the install, I noticed that my system seemed a lot slower. I couldn’t pinpoint a specific problem, but things just seemed sluggish, and DNS definitely seemed to be broken. I was moved to act when I noticed that my IMAP client (offlineimap) took 15 minutes to sync 48 IMAP Maildirs between my Mac and my server. According to the offlineimap docs, syncing 50 Maildir folders should take around 3 seconds. Hmmm… that seems a little wrong.

I searched the Internet but was unable to find a specific fix for this problem. A few posters intimated that doing a fresh “Erase and Install” of Mac OS X 10.3 was better than using the “Archive and Install” option. So, this weekend, I backed up my HD using Carbon Copy Cloner, erased my HD, and did a fresh install of Mac OS X Panther.

As expected, everything is quick and zippy again. A sync of my IMAP folders takes 15 seconds now, as opposed to 15 minutes. While I’m happy with the result, I wish I could figure out why the “Archive and Install” option resulted in such a poor outcome.

Feb 8, 2004 - 1 minute read - Comments - weird

Christmas Break

I uploaded the pics from my trip home over Christmas. Nope, this is not our house. Christmas Lights

Comments from old site

untitled

wow. I wish I had that much spare time.

Mark Dalrymple 2004-02-11 16:06:45

info@myXmasHouse.com

Please don't hesitate to delete this post if you believe it is inappropriate.

In late 2007 we are launching a website where people like yourself can upload their Christmas decorated houses. If there is enough interest it may later evolve to a contest. myXmasHouse dot com is the address

Thank you and my apologies for the intrusion.

Unregistered Visitor 2007-10-16 07:10:40

Jan 15, 2004 - 3 minute read - Comments - mac-os-x openacs software

Compiling OpenFTS on Mac OS X

It wasn’t easy, but I finally got OpenFTS-0.3.2-tcl to compile on Mac OS X (10.2.8). I started by reading the excellent Mac OS X porting guides from Fink and Apple. Unfortunately, the OpenFTS Makefile and configure scripts aren’t standard, so I had to muck around with things a lot. Here’s the diff:

diff -U 2 -rbB Search-OpenFTS-tcl-0.3.2/aolserver/Makefile  Search-OpenFTS-tcl-0.3.2-vk/aolserver/Makefile 
--- Search-OpenFTS-tcl-0.3.2/aolserver/Makefile	Tue Nov 19 14:24:44 2002 
+++ Search-OpenFTS-tcl-0.3.2-vk/aolserver/Makefile	Wed Jan 14 23:23:13 2004 
@@ -13,6 +13,6 @@  
 OBJ       = $(SOBJ) $(POBJ) $(OOBJ) nsfts.o
 NSFTSLIB  = nsfts.so
-LDSO      = gcc -shared
 INC       = -I../include -I$(NS_HOME)/include -I$(NS_HOME)/nsd
+MODLIBS  += -L/sw/lib -L/usr/local/aolserver/lib -ltcl8.4 -lnsd
 
 .SUFFIXES: .c .h .so .l
@@ -26,5 +26,5 @@
 
 $(NSFTSLIB): $(OBJ)
-	$(LDSO) $(OBJ) $(LIBS) -o $(NSFTSLIB)
+	$(LDSO) $(OBJ) $(LIBS) -o $(NSFTSLIB) $(MODLIBS)
 
 clean:
diff -U 2 -rbB Search-OpenFTS-tcl-0.3.2/configure Search-OpenFTS-tcl-0.3.2-vk/configure
--- Search-OpenFTS-tcl-0.3.2/configure	Tue Nov 19 14:24:44 2002
+++ Search-OpenFTS-tcl-0.3.2-vk/configure	Wed Jan 14 22:36:44 2004
@@ -2771,4 +2771,16 @@
 fi
 
+#
+# Mac OS X 10.2
+#
+# vinodk: not sure if all of this is needed/accurate
+if test `uname` = "Darwin"; then
+    PLATFORM="osx"
+    CC="cc"
+    LD="cc"
+    CFLAGS="$CFLAGS -no-cpp-precomp"
+    LDSO="$LD -flat_namespace -bundle"
+fi
+
 if test $PLATFORM = "unknown"; then
         { { echo "$as_me:2774: error: unknown platform" >&5

Then, follow the instructions in AOLSERVER.INSTALL. I’m not too handy with this Makefile/configure stuff, so I’d appreciate any guidance on how to do this properly :-).

Comments from old site

Doesn't compile on Panther

Hi Vinod,

Your fix for makefile doesn't seem to be working on Panther. Configure script goes through but when compiling I get the following error:

cc -o Parser.so -shared Parser.o deflex.o flexparser.o
cc: unrecognized option `-shared'
ld: Undefined symbols:
_main
_Tcl_AppendElement
_Tcl_AppendResult
_Tcl_CreateObjCommand
_Tcl_GetIntFromObj
_Tcl_GetStringFromObj
_Tcl_NewIntObj
_Tcl_NewStringObj
_Tcl_ObjSetVar2
_Tcl_PkgProvide
_Tcl_SetObjResult
make[1]: *** [Parser.so] Error 1
make: *** [parser] Error 2

I'll keep on searching for what should be changed and will keep you posted. Otherwise thanks for spending your time on OS X issues.

Jarkko Laine 2004-02-19 02:31:53

-shared option not supported

OK, from apple dev docs:

-shared
In GCC 2, generates shared libraries. In GCC 3, this option is not supported, so you should use libtool (or use ld directly) instead. Shared libraries in Mac OS X may be different from those you are accustomed to on other platforms. See “Dynamic Libraries and Plug-ins”.

Don't know how to fix this in this case, though.

Jarkko Laine 2004-02-19 02:57:33

untitled

Hey Jarkko,

I think you're trying to compile all of OpenFTS and that doesn't work (at least I haven't tried to get it to work). I just wanted to compile the nsfts.so module, which is all that you need for OpenACS. So, instead of doing make in the Search-OpenFTS-tcl-0.3.2 directory, cd aolserver and do the make there.

I think if you follow the directions in the AOLSERVER.INSTALL file, it'll work.
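In other words, the module-only build is just (a sketch, assuming the 0.3.2 tarball layout and that configure has already been run):

```shell
# Build just the AOLserver nsfts module, not the whole OpenFTS tree
cd Search-OpenFTS-tcl-0.3.2/aolserver
make    # produces nsfts.so
```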

Vinod Kurup 2004-02-19 23:50:27

untitled

Thanks, Vinod,

You're right. I actually thought I was running the make in aolserver dir, but now that I retried it there, it went through smoothly :o)

Jarkko Laine 2004-02-20 01:42:53

Jan 10, 2004 - 5 minute read - Comments - openacs mac-os-x

Installing OpenACS via Darwinports

Installing OpenACS via Darwinports

by Vinod Kurup


I’ve created a Darwinports installation for OpenACS. (What is Darwinports?). This will allow you to set up a quick test installation to try out OpenACS. I wouldn’t recommend it for a production (or even a development) server. For those purposes, I’d strictly follow the OpenACS documentation.

Download my OpenACS portfiles and stick them in your local darwinports repository. If you don’t already have a local darwinports repository, create one and add its location to /opt/local/etc/ports/sources.conf (I use file:///Users/vinod/dports-dev). Here are the steps I took to get everything working (on Mac OS X Panther).
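The sources.conf step looks like this (a sketch — the path is my local tree; adjust it to wherever you unpack the portfiles):

```shell
# Register the local ports tree with darwinports
echo "file:///Users/vinod/dports-dev" | sudo tee -a /opt/local/etc/ports/sources.conf
```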

:~$ wget http://www.kurup.org/files/openacs-dports.tgz
:~$ tar xvzf openacs-dports.tgz
dports-dev/www/
dports-dev/www/aolserver-nscache/
dports-dev/www/aolserver-nscache/Portfile
dports-dev/www/aolserver-nspostgres/
dports-dev/www/aolserver-nspostgres/files/
dports-dev/www/aolserver-nspostgres/files/patch-Makefile.diff
dports-dev/www/aolserver-nspostgres/Portfile
dports-dev/www/aolserver-nssha1/
dports-dev/www/aolserver-nssha1/Portfile
dports-dev/www/openacs/
dports-dev/www/openacs/files/
dports-dev/www/openacs/files/patch-config.diff
dports-dev/www/openacs/files/patch-functions.diff
dports-dev/www/openacs/files/patch-install-sh.diff
dports-dev/www/openacs/files/patch-install-tcl.diff
dports-dev/www/openacs/files/patch-nsd-postgres.diff
dports-dev/www/openacs/Portfile
dports-dev/www/tclwebtest/
dports-dev/www/tclwebtest/Portfile
dports-dev/textproc/
dports-dev/textproc/tdom/
dports-dev/textproc/tdom/Portfile
:~$ cd dports-dev/
:~/dports-dev$ portindex
Creating software index in /Users/vinod/dports-dev
Adding port textproc/tdom
Adding port www/aolserver-nscache
Adding port www/aolserver-nspostgres
Adding port www/aolserver-nssha1
Adding port www/openacs
Adding port www/tclwebtest

Total number of ports parsed:	6 
Ports successfully parsed:	6	 
Ports failed:			0

:~/dports-dev$ cd
:~$ sudo port install postgresql +server
--->  Fetching postgresql
--->  Attempting to fetch postgresql-7.4.8.tar.bz2 from ftp://ftp2.ch.postgresql.org/mirror/postgresql/source/v7.4.8
--->  Verifying checksum(s) for postgresql
--->  Extracting postgresql
--->  Applying patches to postgresql
--->  Configuring postgresql
--->  Building postgresql with target all
--->  Staging postgresql into destroot
--->  Packaging tgz archive for postgresql 7.4.8_0+darwin_7+server
--->  Installing postgresql 7.4.8_0+darwin_7+server

:~$ sudo su postgres -c '/opt/local/bin/initdb -D /opt/local/var/db/pgsql/defaultdb'

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale C.

creating directory /opt/local/var/db/pgsql/defaultdb... ok
creating directory /opt/local/var/db/pgsql/defaultdb/base... ok
creating directory /opt/local/var/db/pgsql/defaultdb/global... ok
creating directory /opt/local/var/db/pgsql/defaultdb/pg_xlog... ok
creating directory /opt/local/var/db/pgsql/defaultdb/pg_clog... ok
selecting default max_connections... 50
selecting default shared_buffers... 300
creating configuration files... ok
creating template1 database in /opt/local/var/db/pgsql/defaultdb/base/1... ok
initializing pg_shadow... ok
enabling unlimited row size for system tables... ok
initializing pg_depend... ok
creating system views... ok
loading pg_description... ok
creating conversions... ok
setting privileges on built-in objects... ok
creating information schema... ok
vacuuming database template1... ok
copying template1 to template0... ok

Success. You can now start the database server using:

    /opt/local/bin/postmaster -D /opt/local/var/db/pgsql/defaultdb
            or
    /opt/local/bin/pg_ctl -D /opt/local/var/db/pgsql/defaultdb -l logfile start

    :~$ sudo su postgres -c '/opt/local/bin/pg_ctl -D /opt/local/var/db/pgsql/defaultdb -l /opt/local/var/log/pgsql/defaultdb.log start'
    postmaster successfully started

    :~$ sudo port install openacs
    
    --->  Fetching aolserver
    --->  Attempting to fetch aolserver-4.0.10-src.tar.gz from http://kent.dl.sourceforge.net/aolserver
    --->  Verifying checksum(s) for aolserver
    --->  Extracting aolserver
    --->  Configuring aolserver
    --->  Building aolserver with target all
    --->  Staging aolserver into destroot
    --->  Packaging tgz archive for aolserver 4.0.10_0
    --->  Installing aolserver 4.0.10_0
    
    AOLserver installed into /opt/local/aolserver
    
    You now need to configure the server to your needs. You 
    might want to create another user (e.g. aolserver) to run 
    the server.
    --->  Activating aolserver 4.0.10_0
    --->  Cleaning aolserver
    --->  Fetching aolserver-nscache
    --->  Attempting to fetch nscache-1.5.tar.gz from http://kent.dl.sourceforge.net/aolserver
    --->  Verifying checksum(s) for aolserver-nscache
    --->  Extracting aolserver-nscache
    --->  Configuring aolserver-nscache
    --->  Building aolserver-nscache with target all
    --->  Staging aolserver-nscache into destroot
    --->  Packaging tgz archive for aolserver-nscache 1.5_0
    --->  Installing aolserver-nscache 1.5_0
    --->  Activating aolserver-nscache 1.5_0
    --->  Cleaning aolserver-nscache
    --->  Fetching aolserver-nspostgres
    --->  Attempting to fetch nspostgres-4.0.tar.gz from http://kent.dl.sourceforge.net/aolserver
    --->  Verifying checksum(s) for aolserver-nspostgres
    --->  Extracting aolserver-nspostgres
    --->  Applying patches to aolserver-nspostgres
    --->  Configuring aolserver-nspostgres
    --->  Building aolserver-nspostgres with target all
    --->  Staging aolserver-nspostgres into destroot
    --->  Packaging tgz archive for aolserver-nspostgres 4.0_0
    --->  Installing aolserver-nspostgres 4.0_0
    --->  Activating aolserver-nspostgres 4.0_0
    --->  Cleaning aolserver-nspostgres
    --->  Fetching aolserver-nssha1
    --->  Attempting to fetch nssha1-0.1.tar.gz from http://kent.dl.sourceforge.net/aolserver
    --->  Verifying checksum(s) for aolserver-nssha1
    --->  Extracting aolserver-nssha1
    --->  Configuring aolserver-nssha1
    --->  Building aolserver-nssha1 with target all
    --->  Staging aolserver-nssha1 into destroot
    --->  Packaging tgz archive for aolserver-nssha1 0.1_0
    --->  Installing aolserver-nssha1 0.1_0
    --->  Activating aolserver-nssha1 0.1_0
    --->  Cleaning aolserver-nssha1
    --->  Fetching tclwebtest
    --->  Verifying checksum(s) for tclwebtest
    --->  Extracting tclwebtest
    --->  Configuring tclwebtest
    --->  Building tclwebtest with target all
    --->  Staging tclwebtest into destroot

    be sure that /opt/local/lib is included in your TCL path:
    tcsh: setenv TCLLIBPATH "$TCLLIBPATH /opt/local/lib"
    bash: export TCLLIBPATH="$TCLLIBPATH /opt/local/lib"
    --->  Packaging tgz archive for tclwebtest 0.9_0
    --->  Installing tclwebtest 0.9_0
    --->  Activating tclwebtest 0.9_0
    --->  Cleaning tclwebtest
    --->  Fetching tdom
    --->  Attempting to fetch tDOM-0.8.0.tar.gz from http://www.tdom.org/files/
    --->  Verifying checksum(s) for tdom
    --->  Extracting tdom
    --->  Configuring tdom
    --->  Building tdom with target all
    --->  Staging tdom into destroot
    
    be sure that /opt/local/lib is included in your TCL path:
    tcsh: setenv TCLLIBPATH "$TCLLIBPATH /opt/local/lib"
    bash: export TCLLIBPATH="$TCLLIBPATH /opt/local/lib"
    --->  Packaging tgz archive for tdom 0.8.0_0
    --->  Installing tdom 0.8.0_0
    --->  Activating tdom 0.8.0_0
    --->  Cleaning tdom
    --->  Fetching openacs
    --->  Attempting to fetch openacs-5.1.5.tar.gz from http://openacs.org/projects/openacs/download/download/
    --->  Verifying checksum(s) for openacs
    --->  Extracting openacs
    --->  Applying patches to openacs
    --->  Configuring openacs
    --->  Building openacs with target all
    --->  Staging openacs into destroot
    --->  Packaging tgz archive for openacs 5.1.5_0
    --->  Installing openacs 5.1.5_0
    --->  Activating openacs 5.1.5_0
    Running the OpenACS install script. (could take 10-15 minutes)
    OpenACS should now be running with the following settings:
    url:          http://127.0.0.1
    serverroot:   /opt/local/openacs/service0
    logs:         /opt/local/openacs/service0/log
    PID files:    /opt/local/aolserver/log
    admin user:   admin@localhost
    admin passwd: 1
    UNIX uid:     service0
    UNIX gid:     web
    
    Adjust AOLserver settings in /opt/local/openacs/service0/etc/config.tcl
    Adjust OpenACS settings through the web interface.
    --->  Cleaning openacs
    :~$ 

Be patient once the install script starts running. It takes a long time. Again, this is just a quick and dirty way of trying out OpenACS. I’d be happy to hear any comments or questions.


Jan 5, 2004 - 1 minute read - Comments - mac-os-x free-software

Open Source saved my photos!

Loyal readers of my blog will remember that my Powerbook Wallstreet G3 died on September 21st, 2002. Since I’m probably the only one who remembers, I’ll point you to that blog entry. The good news was that the hard drive was intact; the bad news was that I didn’t have an easy way of accessing it. I have a Firewire enclosure, and my old Powerbook was running Linux, so I figured it wouldn’t be too difficult to get my new Powerbook to read the data off the old HD. I mean, Mac OS X and Linux are the same underneath, right? Well, not quite. Mac OS X uses a filesystem called HFS+ and my version of Linux uses a filesystem called ext2. Turns out that no one had written an ext2 driver for Mac OS X.

So, I turned to Google and with more effort than usual, found the Mac OS X Ext2 Filesystem Project, or ext2fsx. I subscribed to the RSS feed and just waited until I saw the 1.0.1 product released on 11/20/2003 (Never trust the 1.0!). I loaded it up and voila! My old Wallstreet was back. Thank you Brian Bergstrand!

In celebration, I’ve uploaded pictures from 2001 and early 2002 that I thought I had lost forever. Enjoy.

Jan 2, 2004 - 10 minute read - Comments - rsync sysadmin backup

Backup strategy

Badgertronics.com, borkware.com, and company went through the same ordeal that kurup.org did, and MarkD promptly published his new backup strategy. He uses rsync over SSH, which, interestingly enough, is the same strategy I came up with. Great minds must think alike.

The canonical standard for doing backups on UNIX systems is tar (short for tape archive). tar converts directories of files into a single tar archive file which you would then spit to an attached tape drive. To be up-to-date, you need to do this on a regular basis - perhaps daily. So each day, you’d tar up all your important files and send them off to the tape drive. To recover your files, you’d need to read the whole tar archive file back from the tape and then extract the specific files or directories that you want from the tar archive. This works fine if you’re backing up to a tape drive, because you don’t care about network bandwidth. If you’re backing up over the network to another machine’s hard drive, then each daily archive file has to be sent over the network to your backup machine. Even if you don’t change any files from one day to the next, the entire set of files gets archived and sent across the network. Bye bye precious bandwidth!

rsync is a remote synchronization tool. The first time, rsync sends all the data over the network to your backup machine. Just like tar. The benefit comes the next time you backup. Instead of sending all the files again, rsync only transfers files that have been changed. If no files were changed, no files get transferred. And when you want to recover data, you transfer just the specific files that you want back to your machine (using rsync or scp or telnet or whatever).

Note that rsync also works better than an incremental backup strategy using tar. You can use tar to do a full backup weekly and incremental backups daily. Incremental backups backup just files which have changed since yesterday’s backup. This will improve bandwidth usage, but makes recovery more complex. You have to extract the latest full backup, then extract any more recent incremental backups over your full backup and then extract the specific files that you need. On the other hand, the backup produced with rsync is always up-to-date (as of the last time you ran rsync).

There are lots of backup tools that use rsync as their workhorse and add features on top of it. rdiff-backup makes regularly scheduled backups, but each time it runs, it keeps a copy of the changes to any files which have changed (i.e. the diffs) as well as the date and time of those changes. So, while the backup filesystem is a perfect mirror of your system, a few extra commands can give you a copy of your filesystem as it was, say, 5 days ago. rdiff-backup takes the backup filesystem and applies any diffs that occurred over the past five days. Diff files only record changes from one version of a file to another, so they are very small (in proportion to the entire backup). This allows rdiff-backup to provide snapshots of your system at almost any time without using a significant amount of disk space.

I tried rdiff-backup and it works great for Linux to Linux backups, but it gave me trouble when I tried to backup Linux to Mac OS X. Linux uses a case-sensitive filesystem (ext2). You can have the files foo and FOO in the same directory and Linux treats them as different files. Mac OS X’s file system is case-insensitive, so if you make a file FOO, it will overwrite a file foo in the same directory. rdiff-backup tries to take this into account by quoting any filename with capital letters in it. Unfortunately, the most recent version from CVS (as of 2004-01-01) doesn’t properly unquote them. Even if it did work properly, it ruins one of the main benefits of using rsync - having a true mirror of your system. Now instead of having a file README in your backup, you have a file ;082;069;065;068;077;069 (that’s README quoted). I can no longer just scp a directory from my Mac backup machine to my linux server. I have to use rdiff-backup to do the transfer so that it gets properly unquoted. I would prefer to be able to turn the quoting stuff off completely. If you’re crazy enough to have 2 files in a Linux directory with the same name, but just capitalized differently, you’re just asking for trouble.

I also realized that rdiff-backup is overkill for what I need. If I really want to be able to recover a file that I deleted 5 days ago, I should be using CVS for that data. My backups are there simply for unexpected data loss.

So, I’m currently using plain rsync over SSH. I do 2 things to prepare my system before running rsync. First, I run dpkg --get-selections to dump a list of my installed Debian packages. When recovering my system, I can feed that file to dpkg --set-selections and then run apt-get dselect-upgrade to get all my packages set up as I had before. So, I put this in my crontab:

# save apt selections
00 20 * * *  /usr/bin/dpkg --get-selections > /home/vinod/apt-settings/dpkg-selections
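The restore side of this would look roughly like the commented commands below (paths hypothetical). The selections file itself is just tab-separated package/state lines, which the simulated snippet here illustrates:

```shell
# Restore sketch (run as root on the rebuilt machine; paths hypothetical):
#
#   dpkg --set-selections < /home/vinod/apt-settings/dpkg-selections
#   apt-get dselect-upgrade
#
# The selections file is just "package<TAB>state" lines. Simulating a
# small dump here to show the format:
sel=$(mktemp)
printf 'mutt\tinstall\nwget\tinstall\ntetex-base\tdeinstall\n' > "$sel"
installed=$(awk '$2 == "install" { print $1 }' "$sel")
echo "$installed"
```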

The other thing I need to do is dump my Postgres database. I currently do this from a scheduled procedure in AOLserver, as documented at the bottom of the OpenACS 4.6.3 documentation.
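If you'd rather do the dump straight from cron instead of AOLserver, a crontab entry along these lines would work (the output path is hypothetical, pg_dumpall must run as a Postgres superuser, and the time is chosen to finish before the 21:00 rsync run picks the file up):

```shell
# dump all Postgres databases nightly, before the rsync backup runs
30 20 * * * /usr/bin/pg_dumpall > /home/vinod/pg-backup/pgdump.sql
```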

Finally, we get to rsync. Like MarkD, I’m going to run rsync from my Mac and pull the data from the Linux server over to the Mac. Unlike MarkD, I’m going to run rsync as my root user so that I can preserve users and groups of my /etc files. Here’s my root crontab entry on my Mac (split into multiple lines for reading convenience):

00 21 * * * rsync -av --numeric-ids --delete --delete-excluded
   --exclude-from=/Users/vinod/backup/acorn_exclude -e ssh acorn:/
   /Users/vinod/backup/acorn/latest >> /Users/vinod/backup/backup.log

The -v option is for verbose output, which I'll turn off eventually once I know things are working well. The --numeric-ids option preserves numeric user and group ids rather than trying to map them to user/group names on my Mac. The --delete and --delete-excluded options make sure that files deleted on the Linux server are also deleted in the backup. The --exclude-from option reads /Users/vinod/backup/acorn_exclude and follows its instructions as to which files to back up; I'll talk about that next. The -e option says to use SSH to transfer the data. Finally, the last two arguments say to get the data from host acorn (set to vkurup.acornhosting.net in /etc/hosts) and rsync it to /Users/vinod/backup/acorn/latest. To see exactly which files rsync is excluding or including, add another -v option. Also useful is the -n option, which tells rsync to show you what it would do without actually transferring anything.

Here’s what my exclude file looks like:

- *~
+ /etc 
- /home/vinod/web/kurup/log 
+ /home 
+ /root 
+ /usr 
+ /usr/local 
- /usr/* 
- /var/lib/courier 
- /var/lib/postgres/data 
+ /var 
+ /var/lib 
- /var/* 
- /*

The rsync exclude syntax lets you back up just the files you want. I'll just discuss a couple of examples, so see the manpage for more details. Line 1 says to ignore any emacs backup files. Line 3 says to ignore /home/vinod/web/kurup/log, while line 4 says to include everything else in the /home directory. Conversely, line 7 says to include /usr/local (and everything in it), while line 8 says to ignore anything else in the /usr directory. Line 6 is important: if you didn't include it, nothing in /usr/local would get included. Each filename is checked against this list, but in a recursive fashion. If rsync is working on the file /usr/local/foo, it first looks for a pattern that matches /usr, then for one that matches /usr/local, then finally for one that matches /usr/local/foo, and it uses the first pattern that matches. So, if you didn't include line 6, the search for /usr would find line 8 and stop, thus excluding everything in /usr/local. Not what you want. I know that was confusing - recursion always makes my head spin…

OK, the final thing is to set up SSH properly. I basically set it up like MarkD did, but since I need to log in as root, I wanted to add a few more constraints. You can set up SSH to restrict a specific key to only run a certain command, so if someone somehow gets access to my key, they'll only be able to run rsync. If you don't put the command restriction in, they'll be able to log in to your server as root. To add restrictions to a key, add some statements to the beginning of the key in authorized_keys2. Here's my /root/.ssh/authorized_keys2 (split onto multiple lines):

command="rsync --server --sender -vlogDtpr --delete-excluded --numeric-ids . /"  
ssh-rsa AAAAB....<snip> ...t8= root@Vinod-Kurups-Computer.local. 

To get the proper command to put in your command="" statement, run your rsync command with an unrestricted SSH key first (i.e. just copy the id_rsa.pub to authorized_keys2). When you run the rsync command on your Mac, do a ps auxw | grep rsync on the server machine. The command that you see listed is the command that goes in the key.
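Beyond command="", sshd supports a few more per-key options that are worth stacking onto a root backup key. Something like the following tightens it further (the from= address is a hypothetical example, and the key material is elided as above):

```shell
command="rsync --server --sender -vlogDtpr --delete-excluded --numeric-ids . /",from="10.0.1.2",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAAB....<snip> ...t8= root@Vinod-Kurups-Computer.local.
```

The from= option limits the key to connections from the backup machine's address, and the no-* options disable terminal allocation and forwarding, none of which rsync needs.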

After reading MarkD’s article, the one thing I might change is to tar up the data on the Linux server that needs its user/group info preserved (/etc, /root, and /usr/local) and then store that on the server in my user’s home directory. I then wouldn’t have to rsync as root, so it would simplify things. The drawback is that anytime anything changed in any of those directories, I’d be backing up the entire directories again. Hopefully this would be mitigated by the fact that they wouldn’t change often and since they’re mostly text files, they compress well.
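A sketch of that tar step, with the real command shown in a comment and a throwaway tree used so the snippet can run anywhere (all paths and filenames here are hypothetical):

```shell
#!/bin/sh
# On the real server, as root, the nightly job would be something like:
#   tar czf /home/vinod/sysdirs.tar.gz /etc /root /usr/local
# (tar always records ownership; pass -p when extracting as root to
# restore permissions on the way back out).
# Simulated below with a temp tree:
set -e
work=$(mktemp -d)
mkdir -p "$work/etc"
echo 'acorn' > "$work/etc/hostname"
tar czf "$work/sysdirs.tar.gz" -C "$work" etc
tar tzf "$work/sysdirs.tar.gz"    # lists etc/ and etc/hostname
```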

Comments from old site

Yeah, baby

Great minds do think alike :-) ours too.

I added a pointer to this from my rsync backup page, since I like your concise description of tar and rsync, plus the other root-related issues. I found I haven't had to touch /etc since Cathy already set up networking, and all my junk is running from daemontools. Back when I was colocating my machine, I would have loved to learn that trick though. Rebuilding the /etc configs from scratch in a disaster recovery mode is a huge royal pain.

I wish I had set something like this up *years* ago. Entirely too easy, and it seems to work.

Mark Dalrymple 2004-01-04 19:57:05

HFS+ aware rsync

I'm setting this up on my LAN to do some nightly backups between Macs. It should be noted that the rsync included with Mac OS X isn't HFS+ aware. Not a big deal for backups from Linux, etc., but if you want to absolutely get everything on a Mac, probably worth it to download Rsyncx (check versiontracker.com) or if you are a darwinports user, you can do the following:

sudo port install hfsrsync

and use the "hfsrsync" command as you would the rsync command in the instructions above. Rsyncx actually moves the original rsync command to orig_rsync and places itself as the rsync command, so after downloading and installing it, you could use the instructions unaltered.

Walter McGinnis 2005-03-30 13:27:54

Thanks Walter

I'm currently using Carbon Copy Cloner to backup my mac to an external firewire, but I'll probably want to switch to hfsrsync at some point. Thanks for the pointer!

Vinod Kurup 2005-09-15 11:00:44

rsync exclude file syntax

Thanks for the explanation. Googling rsync exclude did not give me what I was looking for -- syntax for the exclude file. man rsync doesn't explain it very well either

Patrick Greenwood 2006-09-07 10:04:30

will this work

Hi there,

i am totally new to backup of filesystem and i am suppose to propose a backup strategy for a school assignment.

i have a solaris 9.0 run filesystem of 9TB

can i implement Rsync as the backup strategy? would it be sufficient? thanks for advice :)

Unregistered Visitor 2007-02-07 09:46:47

Dec 17, 2003 - 1 minute read - Comments - sysadmin openacs

I'm back!

This site was down for the past few days due to a SNAFU with acornhosting’s upstream provider. You’d think I’d have learned to have a backup strategy in place by now, but you’d be overestimating my intelligence. Luckily, Cathy was somehow able to negotiate with her failed upstream provider to get my (and other acornhosting customers') data back. After this episode, I promise to backup religiously and I’m even more sold on acornhosting as a solid web hosting provider.

Comments from old site

rsync rules(tm)

Cathy indeed went the extra 25 miles for me, recovering most of my junk. I've started setting up backups with rsync, and it's looking like it'll work pretty well. Short commands on my home Mac like:

rsync -a -e ssh markd@borkware.com:/home/markd/ home
rsync -a -e ssh markd@borkware.com:/usr/local/cvsroot cvs

is all that's necessary to update the backup of my home and cvs repository. Now to get the pg_dumpall and this thing into cron on the appropriate machine, and life should be good.

Mark Dalrymple 2003-12-30 10:12:17

Now documented

Cheesy Web backups using Rsync

Mark Dalrymple 2003-12-31 14:32:27