Monday, January 3, 2011

Realtime passenger statistics

Run this in a bash console to watch Passenger's status output refresh every second:
watch -n1 /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.15/bin/passenger-status
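
That path is pinned to one gem version and will break on upgrade. If the passenger bin directory is on your PATH (or you symlink passenger-status into /usr/local/bin), the same thing shortens to:

watch -n1 passenger-status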

Thursday, August 12, 2010

using curl to test rails app session management

I am building a rails app that communicates json back and forth to a client. I needed to test the session creation mechanisms and also some posting of objects. Here are some handy curl calls:

Post a json object { "FishID":"greatwhite" } while also passing the Content-Type and Accept headers as "application/json" so rails will automatically turn the body of the request into entries in the params[] array. The call below would be translated into params[:FishID] => "greatwhite":

curl -H "Content-Type:application/json" -H "Accept:application/json" -d "{\"FishID\":\"greatwhite\"}" http://74.116.250.34/createFish


Post a token to a session start action that will return the session id to the client, then save the cookies that are sent back to a file that can be re-used in subsequent curl calls, thus simulating a longer running session:

curl -H "Content-Type:application/json" -H "Accept:application/json" -d "{\"token\":\"ja9er4fn9\"}" -c cookies.txt http://localhost/beginSession

Now put the two together:

To re-use the old session, pass the cookie file saved by the previous curl call with the -b switch:
curl -H "Content-Type:application/json" -H "Accept:application/json" -d "{\"FishID\":\"greatwhite\"}" -b cookies.txt http://localhost/createFish

That last call should be able to instantiate a fish for the account of the user based on the session from the session cookie.
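
One wrinkle worth knowing: -b only reads the cookie file, it never updates it. If the app re-issues or rotates the session cookie mid-session, pass both -b and -c pointing at the same file so curl reads the old cookies in and writes any new ones back out:

curl -H "Content-Type:application/json" -H "Accept:application/json" -d "{\"FishID\":\"greatwhite\"}" -b cookies.txt -c cookies.txt http://localhost/createFish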

Wednesday, July 14, 2010

A quick and dirty bash script to periodically clean out a static html mirror of a dynamic site

I use nginx to flatten my site to html files on disk so that if varnish ever crashes I can warm up the cache with the last known good copies of everything. However, to make sure we don't constantly serve old pages, I move the files from a "fresh" folder to a "stale" folder every so often. I do some tricky stuff in my nginx config to check the upstream server first. If that fails (or is overloaded), nginx checks the "fresh" folder, and THEN, if worst comes to worst, it checks the "stale" folder and serves from there. At least the user doesn't see an error page, just slightly older content.
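
Until I get the real config posted, here is a rough sketch of that fallback chain (my own approximation, not the actual config; the upstream address, listen port, and named locations are all assumed):

upstream app { server 127.0.0.1:8080; } #assumed backend

server {
    listen 80;

    location / {
        proxy_pass http://app;
        #needed so error_page fires on upstream 5xx responses
        proxy_intercept_errors on;
        error_page 500 502 503 504 = @fresh;
    }

    #try the fresh copy of the flattened html first...
    location @fresh {
        root /cache/fresh;
        try_files $uri $uri/index.html @stale;
    }

    #...and if worst comes to worst, serve the stale copy
    location @stale {
        root /cache/stale;
        try_files $uri $uri/index.html =503;
    }
}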

Anyway, I will post the full nginx config soon. For now, here's the new pure-bash version of the cleanup script. I posted a perl version a little while back; I think this version is faster.

#!/bin/bash
MINS=30 #the age threshold in minutes at which point the file is moved to stale
DIR=/cache/fresh #starting directory
NEWDIR=/cache/stale #directory to move to

cd "$DIR" || exit 1

#-print0 plus a null-delimited read keeps filenames with spaces in one piece
find . -type f -mmin +$MINS -print0 | while IFS= read -r -d '' file
do

#we need to get the directory name holding our file and remove the leading ./ so that it is just foo/bar
backup_dir=$(dirname "$file" | sed 's/^\.\///')

#we don't actually need the basename of the file unless we are going to move it to a different directory inside the stale directory than it was inside the fresh directory
#file_name=$(basename "$file")

#the following just makes sure to skip the . directory if the find command picks it up
if [ "$backup_dir" = "." ] ; then
backup_dir=""
fi

#check whether the supposed new directory already exists in the stale folder as a "file" rather than a directory. When using pretty urls, or a mix of pretty and non-pretty urls, we can end up with files that should be directories: just because a file ends in .php on disk doesn't mean it has no virtual "subdirectories" under it when viewed via the web, e.g. /page.php/1/ which in our flat html version should be /page.php/1/index.html. So delete the file and replace it with a directory.
if [ -f "$NEWDIR/$backup_dir" ] ; then
rm -f "$NEWDIR/$backup_dir"
fi
mkdir -p "$NEWDIR/$backup_dir"

#if we were moving our stale cache to another server we could create the directory by issuing remote ssh commands, e.g.
#ssh testaccount@192.168.10.15 mkdir -p $DIR/$backup_dir

#mv "$file" "$NEWDIR/$backup_dir" #mv was coughing on files with spaces in the name; the quoting here fixes that, but rsync works just fine anyway
rsync --stats -auvz --remove-sent-files --times -og "$file" "$NEWDIR/$backup_dir"
rm -f "$file" #redundant with --remove-sent-files, but a harmless belt-and-braces cleanup
done
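
To make the "every so often" happen, cron is the easy route. A sketch, assuming the script above is saved as /usr/local/bin/freshen-cache.sh and should run every 15 minutes (this is /etc/crontab format; drop the user column if you install it via crontab -e instead):

*/15 * * * * root /usr/local/bin/freshen-cache.sh >/dev/null 2>&1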

Thursday, April 1, 2010

bash oneliner to clear out apache httpd semaphores

If you're seeing the following error message in your apache error logs:

[emerg] (28)No space left on device: Couldn't create accept lock

You probably need to clear out some stale httpd semaphores. The following oneliner will do that for you.

ipcs -s | grep apache | perl -e 'while (<>) { @a=split(/\s+/); print `ipcrm sem $a[1]`}'
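
If perl isn't handy, awk plus xargs does the same cleanup (this assumes, like the perl version, that the semaphore id is the second column of the ipcs -s output):

ipcs -s | grep apache | awk '{print $2}' | xargs -n1 ipcrm -s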

Friday, March 26, 2010

log log log

Log standard out and standard error to syslog, prefixed with the script's name and PID:

#!/bin/bash
#2>&1 folds stderr into stdout; logger ships it to syslog at daemon.notice, tagged with this script's basename (${0##*/}) and PID ($$)
/usr/local/bin/something 2>&1 | logger -p daemon.notice -t ${0##*/}[$$]
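
To watch the entries land, tail your syslog. Where daemon.notice ends up varies by distro and syslog config; /var/log/daemon.log is a common destination on Debian-family systems:

tail -f /var/log/daemon.log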

Saturday, February 13, 2010

Reload nginx config

First make your changes to nginx.conf.

Then run the following command to test the new configuration:

# nginx -t -c /etc/nginx/nginx.conf
2007/10/18 20:55:07 [info] 3125#0: the configuration file /etc/nginx/nginx.conf syntax is ok
2007/10/18 20:55:07 [info] 3125#0: the configuration file /etc/nginx/nginx.conf was tested successfully
Next, look for the process id of the master nginx process:

# ps -ef|grep nginx
root 1911 1 0 18:00 ? 00:00:00 nginx: master process /usr/sbin/nginx
www-data 1912 1911 0 18:00 ? 00:00:00 nginx: worker process
Lastly, tell nginx to reload the configuration and restart the worker processes:

# kill -HUP 1911

taken from:
http://snippets.aktagon.com/snippets/93-Change-nginx-configuration-on-the-fly
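
As a shortcut, if your nginx writes a pid file (commonly /var/run/nginx.pid, though the path depends on how it was built), you can skip the ps step entirely:

# kill -HUP `cat /var/run/nginx.pid`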

Monday, February 1, 2010

bash one-liner (essentially) to loop through a list of files and upload using rsync

for f in `cat file-includes.txt`; do rsync -avz "$f" user@server.ip:"/base-path/$f"; done

You could also add -e 'ssh -i /path/to/preshared/key' to avoid the ssh password prompt on each connection:

for f in `cat file-includes.txt`; do rsync -avz -e 'ssh -i /path/to/preshared/key' "$f" user@server.ip:"/base-path/$f"; done

I find this works better for syncing only a short list of files than trying to use the rsync switch --include-from=/path/to/file-includes.txt along with --exclude-from=/path/to/file-excludes.txt or --exclude='*'.

For some reason I couldn't get it to exclude everything BUT what was in my include-from txt file.
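
The usual gotcha there is that --exclude='*' matches directories too, so rsync never descends into anything; you generally need an --include='*/' ahead of the exclude to keep the directory tree visible. Newer rsyncs also have a switch built for exactly this job. Assuming rsync 2.6 or later, something like this should upload only the listed paths, relative to the current directory:

rsync -avz -e 'ssh -i /path/to/preshared/key' --files-from=./file-includes.txt ./ user@server.ip:/base-path/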


Here is an even more expanded version that creates the whitelist of files for you, containing all visible files in the current directory. You will probably want to modify it so that it only includes a subset of that; otherwise it is pretty pointless (because it just does what rsync normally does and uploads all changed files). You can hand-create your whitelist of files to upload, or even generate it from svn (a rough sketch of that follows the one-liner below).

ls -1 ./ > ./file-includes.txt; for file in `cat ./file-includes.txt`; do rsync -avz -e 'ssh -i /path/to/preshared/key' "$(readlink -f "$(dirname "$file")")/$(basename "$file")" user@server.ip:"/base-path/$(readlink -f "$(dirname "$file")")/$(basename "$file")"; done; rm ./file-includes.txt
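
And if you want svn to generate the whitelist instead of ls, one rough take (a sketch only; it pulls locally added/modified files from svn status rather than parsing commit messages, and assumes no spaces in the filenames):

svn status | awk '/^[AM]/ {print $NF}' > ./file-includes.txt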