Tuesday, December 22, 2009
How to execute remote bash commands using ssh
ssh -t YOURHOST "bash --rcfile PATH_TO_RCFILE_ON_REMOTE_HOST_HOME_DIR_SHORTCUTS_WORK_FOR_AUTHED_USER"
bash will be executed on the remote host, it will source the specified RCFILE at startup, and the connection will remain open. -t forces allocation of a pseudo-terminal for the ssh session so that you have a real terminal.
You can run other variants the same way, still using ssh -t. For example, if screen is installed on the remote host, you can do:
ssh -t YOURHOST screen
Alternative:
ssh YOURHOST bash --rcfile YOUR_RC_FILE -i
But then you don't have a real terminal, and some stuff will not work correctly (like tab auto-completion).
from:
http://www.linuxforums.org/forum/linux-networking/102713-how-execute-remote-shell-commands-via-ssh.html
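The two styles from the post, side by side (host, user and rcfile path are placeholders invented for this sketch):

```shell
# Hypothetical host/user; illustrates the two variants described above.
# With -t: a pseudo-terminal is allocated, so job control and tab completion work.
ssh -t user@example.com "bash --rcfile ~/.remote_rc -i"
# Without -t: commands still run, but there is no real terminal on the other end.
ssh user@example.com "bash --rcfile ~/.remote_rc -i"
```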
Sunday, December 6, 2009
my current bash_profile
export EC2_HOME=~/.ec2/ec2-api-current
export ELB_HOME=~/.ec2/ec2-ElasticLoadBalancing-current
export M2_HOME=~/apache-maven-2.2.0
export M2=$M2_HOME/bin
export EC2_CERT=~/.ec2/cert-R4SBVFO3FBH7TLS27NS7GEL5FG345ZBJ.pem
export EC2_PRIVATE_KEY=~/.ec2/pk-R4SBVFO3FBH7TLS27NS7GEL5FG345ZBJ.pem
export AWS_X509_CERT=~/.ec2/jessesanford.pem
export JAVA_HOME=/Library/Java/Home
export EDITOR=/usr/bin/vim
export PATH=/opt/local/bin:/opt/local/sbin:$PATH
export PATH=/opt/local/apache2/bin:/opt/local/subversion/bin:$PATH
export PATH=~/zero:~/apache-maven-2.2.0/bin:/usr/local/zend/share/ZendFramework/bin:$PATH
export PATH=${PATH}:~/.ec2/ec2-api-current/bin:~/.ec2/ec2-ami-current/bin:~/.ec2/ec2-ElasticLoadBalancing-current/bin:~/.ec2/ec2-CloudWatch-current/bin:~/.ec2/ec2-AutoScaling-current/bin
test -r /sw/bin/init.sh && . /sw/bin/init.sh
alias mysqlstart='sudo /opt/local/bin/mysqld_safe5 &'
alias mysqlstop='/opt/local/bin/mysqladmin5 -u root -p shutdown'
alias apachestart='sudo /opt/local/apache2/bin/apachectl start'
alias apachestop='sudo /opt/local/apache2/bin/apachectl stop'
alias apacherestart='sudo /opt/local/apache2/bin/apachectl restart'
if [ -f /opt/local/etc/bash_completion ]; then
. /opt/local/etc/bash_completion
fi
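The `test -r FILE && . FILE` guard used above for Fink's init.sh is a handy idiom on its own. Here is a minimal, self-contained demo of it; the file path and GREETING variable are made up for the sketch:

```shell
#!/bin/bash
# Demo of the "source it only if it's readable" guard used in the profile above.
# /tmp/demo_init.sh and GREETING are placeholders invented for this example.
rcfile=/tmp/demo_init.sh
echo 'GREETING=hello' > "$rcfile"
test -r "$rcfile" && . "$rcfile"   # sourced only when the file exists and is readable
echo "$GREETING"
```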
Wednesday, November 25, 2009
iterating through lines in a file in bash
#!/bin/bash
cat filename | while IFS= read -r line; do
echo "$line"
done
another way:
#!/bin/bash
IFS=$'\n'
for line in $(cat filename); do
echo "$line"
done
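A self-contained way to try the loop above without needing an existing file (the temp path is created on the fly; `IFS= read -r` keeps leading whitespace and backslashes intact):

```shell
#!/bin/bash
# Create a throwaway input file, then iterate over it line by line.
tmp=$(mktemp)
printf 'first line\nsecond line\n' > "$tmp"
while IFS= read -r line; do
    echo "got: $line"
done < "$tmp"
rm -f "$tmp"
```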
Tuesday, October 6, 2009
article on symfony svn externals
http://echodittolabs.org/blog/2009/09/symfony-and-svnexternals-super-slick-easy-way
pressflow a drupal branch for scaling
http://fourkitchens.com/pressflow-makes-drupal-scale/downloads
//database.inc
function db_query($query) {
  $args = func_get_args();
  array_shift($args);
  $query = db_prefix_tables($query);
  if (isset($args[0]) and is_array($args[0])) { // 'All arguments in one array' syntax
    $args = $args[0];
  }
  _db_query_callback($args, TRUE);
  $query = preg_replace_callback(DB_QUERY_REGEXP, '_db_query_callback', $query);
  // load balancing
  if (strpos(strtolower($_GET['q']), "admin") !== false) {
    db_set_active('write'); // it's important that all admin pages get access to the most recent data
  }
  elseif (strpos(strtolower($query), "select") === 0) {
    db_set_active('read'); // this will not contain any data from the master (write) database until replication happens
  }
  else {
    db_set_active('write');
  }
  return _db_query($query);
}

//sites/default/settings.php
//$db_url = 'mysql://username:password@localhost/databasename';
$db_url = array(
  'default' => 'mysql://username:password@localhost/databasename',
  'read' => 'mysql://username:password@localhost/databasename',
  'write' => 'mysql://username:password@localhost/databasename'
);
I got that patch from the following post on drupal.org:
http://groups.drupal.org/node/2147
2bits posts a lot on drupal performance
http://2bits.com/articles/drupal-performance-tuning-and-optimization-for-large-web-sites.html
article on performance monitoring the LAMP stack
http://2bits.com/articles/tools-for-performance-tuning-and-optimization.html
Drupal Uploading files to folder outside of the webroot
Kind of step by step:
http://www.vmtllc.com/drupal-as-an-intranet
Description of drupal public/private filesystem
http://drupal.org/node/230984
Ebook on files in drupal:
http://11heavens.com/files-in-Drupal
List of all file upload modules
http://groups.drupal.org/node/20291
using nginx as reverse proxy with http-acceleration (caching)
Proxy_cache methods (most like varnish):
http://www.ruby-forum.com/topic/183590
Nginx docs for proxy_cache:
http://wiki.nginx.org/NginxHttpProxyModule#proxy_cache
Proxy_store method:
Another, older type of nginx caching (it's not really caching, more like mirroring flat-file copies of content) that just stores a copy of the requested URI's output from the upstream server to disk and then serves that copy indefinitely, as long as it exists (great for sites with content that does not change much):
http://lucasforge.2bopen.org/2009/09/caching-dynamic-content-using-nginx/
More info on using something like the above but this post includes a cron script to delete files from the cache after a certain length of time.
http://mark.ossdl.de/2009/07/nginx-to-create-static-files-from-dynamic-content/
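The cron-driven expiry described in that second post can be as simple as a find one-liner. This is a sketch: /var/www/cache stands in for the proxy_store root and 60 minutes is an arbitrary TTL.

```shell
#!/bin/bash
# Delete mirrored files older than 60 minutes so nginx re-fetches them from
# the upstream on the next request. The path and TTL are placeholders.
cache_root=${1:-/var/www/cache}
if [ -d "$cache_root" ]; then
  find "$cache_root" -type f -mmin +60 -delete
fi
```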
The above 2 posts could be combined with a post like this one (on nginx and memcached) to store the responses in memcache rather than on disk:
http://www.igvita.com/2008/02/11/nginx-and-memcached-a-400-boost/
The question would be how to create proper keys into memcache. I think you could CRC the output from the upstream servers and use that, concatenated with the full URI for the object, as the key. That way, when the content changes, the CRC will change, thus invalidating the old key and the old content that was stored with it.
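That key scheme, sketched in shell; cksum stands in for the CRC, and the URI and response body are invented for the example:

```shell
#!/bin/bash
# Build a cache key as URI + CRC of the upstream response body, so new content
# yields a new key and the stale entry is simply never hit again.
uri="/articles/42"                               # placeholder URI
body=$(mktemp)
printf 'upstream response body' > "$body"        # stand-in for the proxied output
crc=$(cksum < "$body" | awk '{print $1}')
key="${uri}:${crc}"
echo "$key"
rm -f "$body"
```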
Here are some posts about how to create keys for memcached content that changes and invalidating stale content:
http://blog.leetsoft.com/2007/5/22/the-secret-to-memcached
http://nubyonrails.com/articles/about-this-blog-memcached
Nginx docs for proxy_store:
http://wiki.nginx.org/NginxHttpProxyModule#proxy_store
Alternate caching plugin for nginx:
This project is interesting and would be awesome, but it looks too experimental and I don't read Chinese :(
http://code.google.com/p/ncache/
Tuesday, September 29, 2009
depending on svn externals? then piston is for you
http://piston.rubyforge.org/
there is even a post from this guy:
http://jfcouture.com/2007/12/12/the-guerilla-guide-to-git-how-to-start-using-git-even-if-your-team-is-on-svn/
who is using it to allow for svn externals in his git repo! so maybe you can intermix svn externals and git submodules?
attempting to fix svn (subversion) repo corruption using fsfsverify
start by running the svnadmin verify command on your repo
svnadmin verify /Volumes/DATA/SubVersion/projectname
it will run through each and every commit starting at 0 through HEAD
* Verified revision 716.
* Verified revision 717.
* Verified revision 718.
* Verified revision 719.
* Verified revision 720.
* Verified revision 721.
svnadmin: Decompression of svndiff data failed
and possibly get cut off at a certain commit with an error message (which may or may not look like the above)
If and when this happens the best thing to do is to immediately restore your latest backup.
If somehow the latest backup is old and not up to date, you have a few options. The best option is to restore the latest backup and then decide whether you need a record of any of the commits between the time that backup was created and the current HEAD. If you don't need any of the interim commit history, you can just restore the backup and then do an svn check-in to get everything back up to date.
If you do need some of that history you should start by making a backup of the corrupted repository.
Then try to repair the corruption (don't get your hopes up; sometimes it is possible, most times it is not).
The tool we will use to try to repair the repo is called fsfsverify, and it is used in particular for read-length errors, which subversion is surprisingly susceptible to.
read: http://www.szakmeister.net/fsfsverify/
grab: http://www.szakmeister.net/fsfsverify.tar.gz
so using the output from the svnadmin verify command above, we expect our corruption to be in or around revision 722. we start by running fsfsverify against that revision's file:
./fsfsverify/fsfsverify.py -f /Volumes/DATA/SubVersion/projectname/db/revs/722
in the above we run fsfsverify.py with the -f (fix) option against revision 722, where we expect the corruption... you may want to run it against every revision up to HEAD if you don't know which commit is corrupted. NOTE: USING FSFSVERIFY WITH -f MAY CAUSE MORE CORRUPTION; you can scan for corruption without the -f switch. BE SURE TO BACK UP YOUR REPO, EVEN YOUR CORRUPTED REPO, BEFORE USING THE -f SWITCH.
here are some errors you might see if your repo IS corrupted
Traceback (most recent call last):
File "./fsfsverify/fsfsverify.py", line 1120, in
for noderev in strategy:
File "./fsfsverify/fsfsverify.py", line 839, in _nodeWalker
for x in self._nodeWalker():
File "./fsfsverify/fsfsverify.py", line 839, in _nodeWalker
for x in self._nodeWalker():
File "./fsfsverify/fsfsverify.py", line 832, in _nodeWalker
noderev = NodeRev(self.f, self.currentRev)
File "./fsfsverify/fsfsverify.py", line 723, in __init__
self.dir = getDirHash(f)
File "./fsfsverify/fsfsverify.py", line 492, in getDirHash
raise ValueError, "Expected a PLAIN representation (%d)" % f.tell()
ValueError: Expected a PLAIN representation (14899)
If it thinks ("thinks" being the keyword... it might not make things better at all) it fixed things, it might show:
NodeRev Id: 4bn.0.r723/45401
type: file
text: DELTA 723 3991 907 2209 33d818571849f2eb34a7d872be1a5639
cpath: /lib/filter/doctrine/base/BasePageTemplateMapFormFilter.class.php
copyroot: 0 /
NodeRev Id: 4bm.0.r723/45587
type: file
text: DELTA 723 17170 813 1909 faefab79ab1c9b61b8c7ae9297b97127
cpath: /lib/filter/doctrine/base/BaseModulePageFormFilter.class.php
copyroot: 0 /
Copy 7 bytes from offset 17744
Write 7 bytes at offset 17190
Fixed? :-) Re-run fsfsverify without the -f option
it is possible that it fixed your issue, but it is likely that it did not. you can check by running the svnadmin verify command again:
svnadmin verify /Volumes/DATA/SubVersion/projectname
again it will run through each and every commit starting at 0 through HEAD
* Verified revision 716.
* Verified revision 717.
* Verified revision 718.
* Verified revision 719.
* Verified revision 720.
* Verified revision 721.
svnadmin: Decompression of svndiff data failed
again we see we have the same issue. SO it's time to give up on keeping that revision: it's corrupted and we don't have a backup of it, so we cut our losses. CUT being the keyword, as we are literally going to slice out that bad revision using some nimble svndumps of all the commits around it, then merge them all back together without the corrupted revision in them. then we load the new merged dump of all the good commits into a NEW repo and start over (only after backing up the new repo, of course. you should have been doing more backups and never have had to deal with this in the first place!). read my previous few blog posts for how to use svndump to slice the corrupted repo into pieces and then bring them back together without corruption.
dealing with subversion corruption: dumping good segments of the repo while slicing out corrupted portions
in the above example the corrupted commits are 722 and 723, so we dump everything up to revision 721 into one file and then:
svnadmin dump ./usmagazine/ --incremental -r 724:724 > usmagazine_r724.dump
just revision 724, again with the --incremental switch,
and then everything from 726 to head (725 was also corrupted) again with the --incremental switch:
svnadmin dump ./usmagazine/ --incremental -r 726:HEAD > usmagazine_r726.dump
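Putting the slicing together, the full sequence described above could look like this (the first segment starts at revision 0, so it does not need --incremental; paths and revision numbers are from this example):

```shell
# Slice the repo around the corrupted revisions (722, 723 and 725 here).
svnadmin dump ./usmagazine/ -r 0:721 > usmagazine_r0to721.dump
svnadmin dump ./usmagazine/ --incremental -r 724:724 > usmagazine_r724.dump
svnadmin dump ./usmagazine/ --incremental -r 726:HEAD > usmagazine_r726.dump
```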
dealing with subversion corruption: merge two svndump (subversion dump) flat files using svndumptool
read:http://svn.borg.ch/svndumptool/
start by checking the dumps you made (with svnadmin) for validity (make sure you didn't include the corrupted commits in your dump)
$ python ./svndumptool-0.5.0/svndumptool.py check -A projectnamefull.dump
Checking file projectnamefull.dump
Traceback (most recent call last):
File "./svndumptool-0.5.0/svndumptool.py", line 116, in
sys.exit( func( appname, args ) )
File "/Volumes/DATA/Staff/jessesanford/svndumptool-0.5.0/svndump/tools.py", line 523, in svndump_check_cmdline
if check.execute( filename ) != 0:
File "/Volumes/DATA/Staff/jessesanford/svndumptool-0.5.0/svndump/tools.py", line 241, in execute
while dump.read_next_rev():
File "/Volumes/DATA/Staff/jessesanford/svndumptool-0.5.0/svndump/file.py", line 474, in read_next_rev
self.__skip_empty_line()
File "/Volumes/DATA/Staff/jessesanford/svndumptool-0.5.0/svndump/file.py", line 132, in __skip_empty_line
raise SvnDumpException, "expected empty line, found '%s'" % line
svndump.common.SvnDumpException: expected empty line, found ''
the above dump DID include corruption, as you can see from the python script bombing out... what you should see instead is:
$ python ./svndumptool-0.5.0/svndumptool.py check -A projectname_r0to721.dump
Checking file projectname_r0to721.dump
OK
Once you are satisfied that all your dump files are clean (corruption free) and ready to be merged back into one, run:
$python ./svndumptool-0.5.0/svndumptool.py merge -iprojectname_r0to721.dump -iprojectname_r724.dump -iprojectname_r726.dump -oprojectname_merged.dump
you can see that this merges dump files for a few different segments of revisions. in the above example revisions 722, 723 and 725 were corrupted, so i had to slice the repo into 3 different sections and then merge them all back together to minimize the loss of version history.
load data from svndump flatfile into new repo
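Loading the merged dump back into a fresh repository is a two-step svnadmin affair; a sketch with placeholder paths:

```shell
# Create an empty repository, then replay the merged dump into it.
svnadmin create /Volumes/DATA/SubVersion/projectname_new
svnadmin load /Volumes/DATA/SubVersion/projectname_new < projectname_merged.dump
```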
check your svn (subversion) repo for corruption
modify subversion commit message for particular commit
there are a few ways to do this. the following requires that you have shell access to the server holding the repo.
svnadmin setlog /Volumes/DATA/SubVersion/projectname/ -r 2297 ./tmppropmessage.txt --bypass-hooks
you have to do this on the machine where the subversion repo is available via a normal filesystem path, and you have to pass the message in as a text file. strange that you can't pass a -m "message" param.
$ cat tmppropmessage.txt
refactored the blah blah blah commit message goes here.
NOTE: the --bypass-hooks option should be used with care. there are sometimes things in the pre-revprop-change hook script that are important (like emailing an administrator to let them know that you changed a property)
if you don't pass the --bypass-hooks option you may have to deal with whatever logic is in the /path/to/repo/projectname/hooks/pre-revprop-change.tmpl
here is an example of that hook (note i trimmed out all the comments):
$ cat ./hooks/pre-revprop-change.tmpl
REPOS="$1"
REV="$2"
USER="$3"
PROPNAME="$4"
ACTION="$5"
if [ "$ACTION" = "M" -a "$PROPNAME" = "svn:log" ]; then exit 0; fi
echo "Changing revision properties other than svn:log is prohibited" >&2
exit 1
as you can see, the above logic makes sure the user is not trying to change any property OTHER than the log message. so changing our log message would have worked without the --bypass-hooks switch in this instance.
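If, instead of bypassing the hook, you want to keep it and be notified, a hypothetical variant of the hook above might look like this (the mail invocation and admin address are assumptions, not from the stock template):

```shell
#!/bin/sh
REPOS="$1"
REV="$2"
USER="$3"
PROPNAME="$4"
ACTION="$5"
# Allow svn:log edits, but tell an administrator about them first.
if [ "$ACTION" = "M" ] && [ "$PROPNAME" = "svn:log" ]; then
  echo "svn:log for r$REV in $REPOS changed by $USER" \
    | mail -s "revprop change on $REPOS" admin@example.com
  exit 0
fi
echo "Changing revision properties other than svn:log is prohibited" >&2
exit 1
```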
dumping svn (subversion) repositories to flat file for backup.
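A dump-to-flat-file backup along the lines this title suggests can be a one-liner; paths and the timestamped naming are placeholders for this sketch:

```shell
# Dump the whole repository to a timestamped flat file for backup.
svnadmin dump /Volumes/DATA/SubVersion/projectname > /backups/projectname_$(date +%F).dump
```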
Wednesday, September 23, 2009
bash script to push (if a trigger is in place) an application directory, the corresponding app webroot, and the app db, with exclude lists for all 3
Note: the triggers in the script below are sort of backwards. really, you should write the trigger to the file system via some web-accessible script. then, if the script below is cronned, it will notice the trigger, sync everything, and delete the trigger when it's done... after that, the script waits for someone to "pull the trigger" and write out another trigger file to cause it to sync again.
[root@usweekly-qa-app jsanford]# cat /usr/local/bin/pushWebroot.sh
#!/bin/bash
## This script is using key-based authentication ##
##################################
##### User-editable Variables ####
##################################
source_path=/usr/local/apache2/htdocs
target_path=/usr/local/apache2/htdocs
webexcludes=/usr/local/apache2/htdocs/config/rsync_web_excludes.txt
htdocsexcludes=/usr/local/apache2/htdocs/config/rsync_htdocs_excludes.txt
#excludes=''
user=rsync_username
privatekey=/path/to/private/key/for/above/rsync/username_dsa
testing='0'
servers='
ip.for.first.slave.app.server
ip.for.second.slave.app.server'
dbmaster='ip.for.db.master'
dbslave='ip.for.db.slave'
database='databasename'
dbmasteruser='databaseuser'
dbmasterpass='databasepassword'
dbslaveuser='slavedbuser'
dbslavepass='slavedbpassword'
triggerfiles=`ls $source_path/web/trigger/`
dbexcludes='comma,delimited,list,of,table,names,to,exclude,make,sure,to,preface,them,with,the,database,name'
################################################
##### Do not edit anything below this line! ####
################################################
################################################
######### Determining Sync Settings ############
################################################
if [ -z "$triggerfiles" ]; then
## If no trigger file exists, the script exits ##
echo "There is no trigger file in $source_path/web/trigger/"
echo "No content will be published"
exit
else
source=$source_path/
target=$target_path/
websync='0'
htdocssync='0'
for triggerfile in $triggerfiles; do
if [ "$triggerfile" == "htdocs" ]; then
echo "We're syncing all content under htdocs"
htdocssync='1'
elif [ "$triggerfile" == "web" ]; then
echo "We're syncing htdocs/web root only"
websync='1'
websource=$source_path/web/
webtarget=$target_path/web/
elif [ "$triggerfile" == "db" ]; then
echo "We're syncing the database"
dbsync='1'
else
echo "nothing was specified or an error occurred"
htdocssync='0'
websync='0'
dbsync='0'
exit
fi
done
echo "sources: $source"
echo "targets: $target"
echo "websources: $websource"
echo "webtargets: $webtarget"
echo "Databases Sync=$dbsync"
echo "Web Sync=$websync"
echo "Htdocs Sync=$htdocssync"
##################################
### Determining Testing Mode #####
##################################
if [ "$testing" = "0" ]; then
dryrun=""
echo "#######################################"
echo "### We are NOT running in test mode ###"
echo "### Content will be replicated ########"
echo "#######################################"
else
dryrun="--dry-run"
echo "##########################################"
echo "### We are running in test mode ##########"
echo "### no content will be replicated ########"
echo "##########################################"
fi
echo $dryrun
##################################
### Defining Sync Sources ########
##################################
if [ "$htdocssync" = "1" ]; then
for server in $servers; do
echo "Starting content push to $server at `date`"
#echo "$server"
## See if servers are there and accepting connections ##
#echo "Hello $server"
#ssh root@$server hostname
echo "`date`"
echo "synchronizing $source to $server:$target"
/usr/bin/rsync -avzC --force --delete --progress --stats $dryrun --exclude-from=$htdocsexcludes -e "ssh -ax -i $privatekey" $source $user@$server:$target
done
else
echo "No static htdocs Content will be synced"
fi
##################################
### Defining web Sync Sources ##
##################################
if [ "$websync" = "1" ]; then
for server in $servers; do
echo "Starting content push to $server at `date`"
#echo "$server"
## See if servers are there and accepting connections ##
#echo "Hello $server"
#ssh root@$server hostname
echo "`date`"
echo "synchronizing $websource to $server:$webtarget"
/usr/bin/rsync -avzC --force --delete --progress --stats $dryrun --exclude-from=$webexcludes -e "ssh -ax -i $privatekey" $websource $user@$server:$webtarget
done
else
echo "No static web Content will be synced"
fi
##################################
### Database Sync ################
##################################
if [ "$dbsync" = "1" ]; then
echo "Replicating mysql database from $dbmaster to $dbslave"
####################################################
### We are using mk-table-sync instead of SQL Yog ##
### Uncomment one of the two lines below only ######
####################################################
mk-table-sync --execute $dryrun --verbose --ignore-tables $dbexcludes --databases $database u=$dbmasteruser,p=$dbmasterpass,h=$dbmaster u=$dbslaveuser,p=$dbslavepass,h=$dbslave
#/root/sqlyog/sja /root/sqlyog/usmagazine_prod.xml
else
echo "No database will be synced"
fi
echo "Finishing content push at `date`"
## echo "Re-setting sync cycle:"
## echo "touch $source_path/web/trigger/$triggerfile"
## touch $source_path/web/trigger/$triggerfile
fi
exit 0
use cron and bash script rsync to synchronize two webroots (or any folder for that matter)
#!/bin/bash
echo syncing everything in webroot to www1 webroot
rsync -avzC --force --progress -e "ssh -i /keys/cron_dsa" --exclude-from=/usr/local/bin/scripts/rsync_excludes.txt /www/ username@slave_server_ip:/www/
echo done!
use sqlyog job agent to synchronize two mysql databases
Here is a link to the download page for the sqlyog job agent (SJA): http://www.webyog.com/en/downloads.php#sqlyog
Here is the xml file for the master-to-slave push job (note: the xml below had its < and > characters html-encoded on the original page, so you might not be able to just copy and paste it):
[root@cms ]# cat scripts/sync_cms_db_to_www1_db.xml
<version="6.5">
<syncjob>
<abortonerror abort="no">
<fkcheck check="no">
<twowaysync twoway="no">
<host>localhost</host>
<user>username</user>
<pwd>password</pwd>
<port>3306</port>
<ssl>0</ssl>
<sslauth>0</sslauth>
<clientkey>
<clientcert>
<cacert>
<cipher>
<charset>
<database>databasename</database>
<target>
<host>slave_server_ip</host>
<user>username</user>
<pwd>password</pwd>
<port>3306</port>
<ssl>0</ssl>
<sslauth>0</sslauth>
<clientkey>
<clientcert>
<cacert>
<cipher>
<charset>
<database>databasename</database>
</charset></cipher></cacert></clientcert></clientkey></target>
<tables all="yes">
</tables></charset></cipher></cacert></clientcert></clientkey></twowaysync></fkcheck></abortonerror></syncjob>
Here is the bash script that runs the above job xml:
[root@cms ]# cat scripts/sync_cms_dbs_to_www1_dbs.sh
#!/bin/bash
echo Syncing cms dbs to www1 dbs...
/usr/local/bin/scripts/sja "/usr/local/bin/scripts/sync_cms_db_to_www1_db.xml" -l"/var/log/databasename_db_cms_to_www1_sync_log.txt" -s"/var/log/databasename_db_cms_to_www1_sync_session.xml"
echo Done!
bash script to tar gzip backup apache webroot (or any folder) with timestamp
#!/bin/bash
echo taring webroot
stamp=$(date --utc --date "$1" +%F)
tar -czf /backups/sitename_webroot_$stamp.tgz /path_to_webroot/*
echo done
backup script for mysql
[root@cms]# vi scripts/backup_db.sh
#!/bin/bash
echo "dumping db"
stamp=$(date -u --date "$1" +%F)
mysqldump -u username --password=password databasename > /backups/databasename_db_backup_$stamp.sql
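The matching restore for the dump above would be piped back through the mysql client; the credentials are the same placeholders, and the timestamp is whichever backup file you want to restore:

```shell
#!/bin/bash
echo "restoring db"
# 2009-09-23 is a placeholder for the $stamp of the backup being restored.
mysql -u username --password=password databasename < /backups/databasename_db_backup_2009-09-23.sql
```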
Monday, September 21, 2009
bash script to install yum on centos
[root@cms ~]# vi yum-rpm-install.sh
for file in \
gmp-4.1.4-10.el5.i386.rpm \
readline-5.1-1.1.i386.rpm \
python-2.4.3-19.el5.i386.rpm \
libxml2-2.6.26-2.1.2.i386.rpm \
libxml2-python-2.6.26-2.1.2.i386.rpm \
expat-1.95.8-8.2.1.i386.rpm \
python-elementtree-1.2.6-5.i386.rpm \
sqlite-3.3.6-2.i386.rpm \
python-sqlite-1.1.7-1.2.1.i386.rpm \
elfutils-0.125-3.el5.i386.rpm \
rpm-python-4.4.2-47.el5.i386.rpm \
m2crypto-0.16-6.el5.1.i386.rpm \
python-urlgrabber-3.1.0-2.noarch.rpm \
yum-metadata-parser-1.0-8.fc6.i386.rpm \
yum-3.0.5-1.el5.centos.5.noarch.rpm
do rpm -Uvh http://mirror.centos.org/centos-5/5.1/os/i386/CentOS/$file;
done
Hack cron bash script to keep specific URLS in the cache
[root@cms ~]# vi /root/touch_urls-www1.sh
#!/bin/bash
for i in `cat URLs.conf`
do curl -H "Host: www.refinery29.com" http://127.0.0.1$i -s >> /dev/null
done
Here is the URLs.conf:
[root@cms ~]# vi /root/URLs.conf
/index.php
/about.php
/contact.php
Thursday, September 10, 2009
One for Anthony: how to delete from the svn server while maintaining your local copy
svn delete /path/to/file/name/in/your/working/copy --keep-local
svn ci /path/to/file/name/in/your/working/copy -m "removing files i shouldn't have checked in cause I'm a dummy"
Wednesday, September 9, 2009
Change subversion log messages without pre-revprop-change hook script installed
svnadmin setlog /Volumes/DATA/SubVersion/reponame/ -r 2297 ./textfilecontainingmessage.txt --bypass-hooks #(2297 is the revision number)
the --bypass-hooks switch is what will allow you to get past this message:
svnadmin: Repository has not been enabled to accept revision propchanges;
Friday, August 28, 2009
Buzz words
SoC (Separation of Concerns) is my first one.
In a nutshell: no one part or layer of your stack should care about the other parts or layers or how they work. This does not need to imply separate physical or logical layers; it could pertain only to the application layer itself, as with many software engineering patterns such as MVC... there we go... another buzz word... I'll put that one in my next post.
Sunday, August 23, 2009
php 5.2.10 pecl problem: pear.php.net is using a unsupported protocal - This should never happen. install failed
vi /usr/local/lib/php/.channels/pear.php.net.reg #(you may have to create the .channels directory)
paste the following in:
a:6:{s:7:"attribs";a:4:{s:7:"version";s:3:"1.0";s:5:"xmlns";s:31:"http://pear.php.net/channel-1.0";s:9:"xmlns:xsi";s:41:"http://www.w3.org/2001/XMLSchema-instance";s:18:"xsi:schemaLocation";s:71:"http://pear.php.net/channel-1.0 http://pear.php.net/dtd/channel-1.0.xsd";}s:4:"name";s:12:"pear.php.net";s:14:"suggestedalias";s:4:"pear";s:7:"summary";s:40:"PHP Extension and Application Repository";s:7:"servers";a:2:{s:7:"primary";a:1:{s:4:"rest";a:1:{s:7:"baseurl";a:4:{i:0;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.0";}s:8:"_content";s:25:"http://pear.php.net/rest/";}i:1;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.1";}s:8:"_content";s:25:"http://pear.php.net/rest/";}i:2;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.2";}s:8:"_content";s:25:"http://pear.php.net/rest/";}i:3;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.3";}s:8:"_content";s:25:"http://pear.php.net/rest/";}}}}s:6:"mirror";a:2:{i:0;a:2:{s:7:"attribs";a:1:{s:4:"host";s:15:"us.pear.php.net";}s:4:"rest";a:1:{s:7:"baseurl";a:4:{i:0;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.0";}s:8:"_content";s:28:"http://us.pear.php.net/rest/";}i:1;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.1";}s:8:"_content";s:28:"http://us.pear.php.net/rest/";}i:2;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.2";}s:8:"_content";s:28:"http://us.pear.php.net/rest/";}i:3;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.3";}s:8:"_content";s:28:"http://us.pear.php.net/rest/";}}}}i:1;a:2:{s:7:"attribs";a:3:{s:4:"host";s:15:"de.pear.php.net";s:3:"ssl";s:3:"yes";s:4:"port";s:4:"3452";}s:4:"rest";a:1:{s:7:"baseurl";a:4:{i:0;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.0";}s:8:"_content";s:34:"https://de.pear.php.net:3452/rest/";}i:1;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.1";}s:8:"_content";s:34:"https://de.pear.php.net:3452/rest/";}i:2;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.2";}s:8:"_content";s:34:"https://de.pear.php.net:3452/rest/";}i:3;a:2:{s:7:"attribs";a:1:{s:4:"type";s:7:"REST1.3";}s:8:"_content";s:34:"https://de.pear.php.net:3452/rest/";}}}}}}s:13:"_lastmodified";a:2:{s:4:"ETag";s:20:""2fe96-59a-31a3fc80"";s:13:"Last-Modified";s:29:"Tue, 02 Jun 2009 05:55:46 GMT";}}
then run:
pear channel-update pecl.php.net
and then you can run something like:
pecl install pdo pdo_mysql
Now you shouldn't see this anymore:
pear.php.net is using a unsupported protocal - This should never happen.
install failed
Adding Shared Objects to php and apache
Here I add pdo, pdo_mysql and mod_rewrite:
sudo pecl install pdo pdo_mysql #(will compile and add compatible pdo.so and pdo_mysql.so modules to your php libs; you will have to enable them in your php.ini)
(from within the root of a fresh copy of the apache src tree, type the following. a fresh src tree for apache is needed because the first time apache compiled it might have cleaned out the c headers and source files for the modules that weren't included in the ./configure)
/usr/local/apache2/bin/apxs -iac modules/mappers/mod_rewrite.c #(will compile the mod_rewrite shared object and enable it for you in httpd.conf)
Thursday, August 20, 2009
bash shell one liner for denying access to a specific user to all svn repos in a directory.
Tuesday, August 11, 2009
Google Chrome for Mac OS X with Flash
Tuesday, August 4, 2009
Saturday, August 1, 2009
Hack VMware fusion to allow for the installation of mac osx
sudo bash
cd "/Library/Application Support/VMware Fusion/isoimages"
mkdir original
mv darwin.iso tools-key.pub *.sig original
perl -n -p -e 's/ServerVersion.plist/SystemVersion.plist/g' < original/darwin.iso > darwin.iso
openssl genrsa -out tools-priv.pem 2048
openssl rsa -in tools-priv.pem -pubout -out tools-key.pub
openssl dgst -sha1 -sign tools-priv.pem < darwin.iso > darwin.iso.sig
for A in *.iso ; do openssl dgst -sha1 -sign tools-priv.pem < $A > $A.sig ; done
exit
Monday, July 13, 2009
basic deployment script using rsync
#!/usr/bin/env sh
user_name=devuser
export user_name
private_key=config/id_dsa
export private_key
remote_host=192.168.3.111
export remote_host
local_path=./
export local_path
remote_path=/var/www/vhosts/usmagazine
export remote_path
exclude_file=config/rsync_exclude.txt
export exclude_file
# -e in the rsync command forces rsync to use ssh as the transport protocol and then
# it passes -ax to the ssh command to disable interactive shell and x11 on the server
# -C in the rsync command causes rsync to ignore the subversion and cvs folders in the
# directory tree
# -a in the rsync command puts rsync in archive mode recursing all folders and
# preserving users symlinks permissions and timestamps
# -z turns on rsync compression
# --delete will delete any files from the remote file system that don't exist on the
# local file system
# --force forcibly answers yes to any prompts for confirmation from rsync
# --exclude-from passes in a file that contains patterns (1 per line) that match files
# using rsync's pattern matching syntax (* for wildcard etc.) a line preceded with a +
# tells rsync to include any files matching the pattern and a line preceded by - tells
# rsync to ignore files matching the pattern. By default all files are included so
# most times you only have to make patterns to black list certain files (for instance
# configuration files that are specific to your sandbox and that should not be
# transferred to a production server)
rsync --progress -azC --force --delete --exclude-from=$exclude_file -e "ssh -ax -i $private_key" $local_path $user_name@$remote_host:$remote_path
Wednesday, June 24, 2009
don't forget to register your rhel!
apt-get install tomcat5
Reading Package Lists... Done
Building Dependency Tree... Done
E: Couldn't find package tomcat5
YUM kept saying:
yum install tomcat5 tomcat5-admin-webapps tomcat5-webapps
Loaded plugins: rhnplugin, security
This system is not registered with RHN.
RHN support will be disabled.
Setting up Install Process
Parsing package install arguments
No package tomcat5 available.
No package tomcat5-admin-webapps available.
No package tomcat5-webapps available.
Nothing to do
"RHN support will be disabled"... except it's not just the RHN repos that get disabled. It even disabled third-party repos like DAG!
Force me to register! Damn YOU!!! anyway this is all I had to do to register... surprisingly quick and painless...
/usr/sbin/rhnreg_ks --username=**************** --password=**********
I hate registering.
Thursday, June 11, 2009
$%@!#& mysql root user has no permissions error 1044
Somehow my mysql root user lost all of its permissions. I think it might be because I moved the database files over from another hard drive that I upgraded to a larger one at some point, did a re-install of mysql, then just overwrote the data directory and started mysql back up without actually setting up the new mysql instance. All the version numbers and paths were the same, so I figured it would be a go. Apparently not. Here is the error I kept getting no matter what I tried:
ERROR 1044 (42000): Access denied for user 'root'@'localhost' to database 'mysql'
I tried every database and kept getting
ERROR 1044 (42000): Access denied for user 'root'@'localhost' to database 'blah'
DAMN!
Anyway, I found this stupid trick in the mysql user forums (note the date of the post!):
Posted by [name withheld] on July 30 2003 11:58pm [Delete] [Edit]
when you are simply trying to:
C:\mysql\bin>mysql -uroot -p mysql
and you get:
ERROR 1044: Access denied for user: '@127.0.0.1' to database 'mysql'
Here is what I do. The key is to supply your real ip address for the -h (host) parameter. On windows, from the command prompt type 'ipconfig' to see your ip address. Once you have that, do the following:
C:\mysql\bin>mysql -h 192.168.0.1 -u root -p mysql
Enter password: ****************
// then I explicitly add root@127.0.0.1 to the user table, so after this I can log in as you would expect
GRANT ALL PRIVILEGES ON *.* TO root@localhost IDENTIFIED BY 'root-password' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO root@127.0.0.1 IDENTIFIED BY 'root-password' WITH GRANT OPTION;
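The whole trick condenses into one non-interactive session, sketched below. The host ip and password are the placeholders from the post above, and the FLUSH PRIVILEGES line is my own addition for good measure; it is not part of the original forum post:

```shell
# Connect over the machine's real ip (not loopback or a socket),
# then grant root its permissions back. Placeholder values throughout.
mysql -h 192.168.0.1 -u root -p mysql <<'SQL'
GRANT ALL PRIVILEGES ON *.* TO root@localhost IDENTIFIED BY 'root-password' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO root@127.0.0.1 IDENTIFIED BY 'root-password' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL
```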
Why mysql would care whether you are connecting from an externally available ip address on your own computer, I don't know. All I do know is that it wouldn't let me grant the root user permissions via the loopback address or via sockets.
This one was a doozy!
Thursday, April 9, 2009
Creating Tap interface on Windows to allow for communication with Virtualbox guest os
I have torn the following step by step instructions from here (http://milksnot.com/joomla/index.php?option=com_content&view=article&id=29&Itemid=25)
Download and install OpenVPN for Windows.
After installation of OpenVPN, a so-called TAP interface should already be installed.
Now rename the existing TAP interface to 'OpenVPN'. We won't otherwise touch this adapter, because we might want to use it with OpenVPN. If you won't be using OpenVPN, then you can skip the part below where you install a second TAP adapter. You do however need to do the configuration bit.
You can add a TAP interface in two ways: Either use the installed script at "start/program files/openvpn/Add a new TAP-Win32 virtual ethernet adapter" or do it manually. If you prefer the latter, then you do not even have to install OpenVPN. You can extract the driver from the installation package and use only that. If you prefer a manual install, this is how you would go about it:
Open Control Panel and select Add Hardware
Select 'Yes, I have already connected the hardware'
Select 'Add a new hardware device'
Select 'Manually select'
Select 'Network adapters'
Select 'Have disk'
Browse to 'C:\Program Files\OpenVPN\driver' and select 'OemWin2k.inf'
Select 'TAP-Win32 Adapter'
Some messages may appear about driver signing. Ignore them.
You do not need to reboot in order to use the new interfaces. Removing a device can be done in Computer Management/Device Manager/Right-click-on-device/uninstall. Now configure the new TAP interface.
Open the Network Connections window and look for the new adapter. It will be called something like 'Local Area Connection'.
Rename the adapter to 'TAP'
Open TAP's properties and browse to General/Adapter/Advanced
Set the adapter's Media Status to 'Always Connected'. If we skip this, then the host machine won't be active on the TAP's network.
Now configure the IP address and mask of the TAP adapter. NOTE: Use a range not in use by any of your other adapters. I spent two bloody hours trying to figure out why my networks were not networking, only to discover I had forgotten to disable two VMware network interfaces that were using the same range as my TAP interfaces.
Using the adapter in VirtualBox: When configuring an interface on a virtual machine, select 'Attached to: Host Interface' and then select the adapter called 'TAP-Win32 adapter V8 #2' from the list of adapters. And from here on it's business as usual.
Creating Host Only network adapter with virtual box on OS X
Start by downloading the Tun/Tap software from the sourceforge page here: http://tuntaposx.sourceforge.net/
Next add the following to a bash script and give it execute permissions (taken from this post on the vbox forums: http://forums.virtualbox.org/viewtopic.php?f=8&t=14871&p=66322#p66322)
#!/bin/bash
echo "starting"
exec 4<>/dev/tap0                 # open the tap device read/write and hold it on fd 4
ifconfig tap0 10.10.10.1 10.10.10.255
ifconfig tap0 up
ping -c1 10.10.10.1
echo "ending"
export PS1="tap interface>"
dd of=/dev/null <&4 &             # continuously reads from the buffer and dumps to /dev/null
Run the script with sudo.
Next modify your vbox guest os via the command line like so:
VBoxManage modifyvm "MyVM" -nic2 hostif #make the second network adapter host-networking
VBoxManage modifyvm "MyVM" -hostifdev2 tap0: # connect that adapter to tap0: (make sure to include the : colon after the tap0)
Start the guest machine.
Configure the guest machine (depending on your distro) to activate the eth1 device with a static ip address in the 10.10.10.x range and a netmask of 255.255.255.0.
Now from the host os, try to ping the guest os on the ip address you gave eth1 above.
If all goes well it should respond to the ping (firewall permitting of course).
If you feel like it, go ahead and modify the /etc/hosts file on your host os to point a human readable name like "virtualbox" at the ip address you configured for eth1 on the guest os.
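That last /etc/hosts step, as a one-liner sketch. The ip and the name are just examples; use whatever you configured for eth1 (and note this modifies a system file, hence the sudo):

```shell
# append a friendly name for the guest to the host's /etc/hosts
echo "10.10.10.2  virtualbox" | sudo tee -a /etc/hosts
ping -c1 virtualbox    # should now resolve to the guest
```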
Thursday, March 26, 2009
Wednesday, March 25, 2009
thoughts on an agile-waterfall mixed process
requirements gathering / exploration
requirements document
client priority analysis
functional specifications
effort analysis of line items in functional spec
prototype risky functionality
mood boards for design
content and functionality document (with summarized effort based on knowledge from effort analysis of functional spec and client priority analysis)
(at this point we should know what the client wants for the initial launch of the product based on effort and priority. Things that are too costly and low priority should be put off until post launch)
wireframes
user testing of wireframes (not required)
annotated wireframes with func spec / inform wireframes from func spec and vice versa
design round based on wireframes
user testing of design (not required)
scaffold functionality based on wireframes (this task might be broken into a dozen or more scrum sprints depending on the size of the project)
content entry
user testing of scaffolding (not required)
design round
further user testing of design (not required)
sign off of design
build out full functionality to work with design (again this task might be broken into a dozen or more scrum sprints depending on the size of the project)
content entry
QA
Bug fixes
product launch
post launch feature requirements review and exploration (begin process over again)
Tuesday, March 24, 2009
more bash wildcard craziness
Now, what happens if you specify a pattern that doesn't match any file system objects? In the following example, we try to list all the files in /usr/bin that begin with asdf and end with jkl, including potentially the file asdfjkl:
Code Listing 5.5: Another example of the * glob

$ ls -d /usr/bin/asdf*jkl
ls: /usr/bin/asdf*jkl: No such file or directory
Here's what happened. Normally, when we specify a pattern, that pattern matches one or more files on the underlying file system, and bash replaces the pattern with a space-separated list of all matching objects. However, when the pattern doesn't produce any matches, bash leaves the argument, wild cards and all, as-is. So, then ls can't find the file /usr/bin/asdf*jkl and it gives us an error. The operative rule here is that glob patterns are expanded only if they match objects in the file system. Otherwise they remain as is and are passed literally to the program you're calling.
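You can watch this rule in action without ls at all, just by echoing patterns in a scratch directory:

```shell
# In an empty scratch directory: a matching glob expands, a
# non-matching glob is passed through literally.
cd "$(mktemp -d)"
touch file1 file2
echo file*      # expands to the matching names: file1 file2
echo nomatch*   # no match, so bash passes it through as-is: nomatch*
```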
Wild card syntax: []
This wild card is like a ?, but it allows more specificity. To use this wild card, place any characters you'd like to match inside the []. The resultant expression will match a single occurrence of any of these characters. You can also use - to specify a range, and even combine ranges. Examples:
myfile[12] will match myfile1 and myfile2. The wild card will be expanded as long as at least one of these files exists in the current directory.
[Cc]hange[Ll]og will match Changelog, ChangeLog, changeLog, and changelog. As you can see, using bracket wild cards can be useful for matching variations in capitalization.
ls /etc/[0-9]* will list all files in /etc that begin with a number.
ls /tmp/[A-Za-z]* will list all files in /tmp that begin with an upper or lower-case letter.
The [!] construct is similar to the [] construct, except rather than matching any characters inside the brackets, it'll match any character, as long as it is not listed between the [! and ]. Example:
rm myfile[!9] will remove all files named myfile plus a single character, except for myfile9
? matches any single character. Examples:
- myfile? matches any file whose name consists of myfile followed by a single character
- /tmp/notes?txt would match both /tmp/notes.txt and /tmp/notes_txt, if they exist
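The bracket and ? examples above can be checked in a scratch directory. Setting LC_ALL=C just pins the sort order of the expansions so the output is predictable:

```shell
export LC_ALL=C
cd "$(mktemp -d)"
touch myfile1 myfile2 myfile9 myfileX notes.txt notes_txt
echo myfile[12]   # myfile1 myfile2
echo myfile[!9]   # everything but myfile9: myfile1 myfile2 myfileX
echo notes?txt    # notes.txt notes_txt
```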
More fun linux tidbits
To solve this problem, you can take advantage of Linux's built-in wild card support. This support, also called "globbing" (for historical reasons), allows you to specify multiple files at once by using a wildcard pattern. Bash and other Linux commands will interpret this pattern by looking on disk and finding any files that match it. So, if you had files file1 through file8 in the current working directory, you could remove these files by typing:
Code Listing 5.2: Removing files using shell completion

$ rm file[1-8]
saving your rm -rf ass
Code Listing 4.12: Setting the 'rm -i' alias

alias rm="rm -i"
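One caveat worth knowing with this alias: prefix the command with a backslash to bypass it for a one-off, non-interactive delete. The shopt line below is only needed because aliases are off by default in non-interactive shells:

```shell
shopt -s expand_aliases
alias rm="rm -i"
f=$(mktemp)
\rm "$f"    # backslash suppresses alias expansion: plain rm, no -i prompt
```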
sending email from php on leopard server 10.5.6
First you must make sure that you have apache and php installed correctly. You can verify this by creating a phpinfo.php file with the contents <?php phpinfo(); ?> and then viewing it at your server's ip address (or domain name) in your client computer's web browser. If the phpinfo.php renders correctly then you can move on to the next steps: (taken from: http://jspr.tndy.me/2008/05/php-mail-and-osx-leopard/)
There are 4 files I used for the following:
- /etc/hostconfig
- /etc/postfix/main.cf
- php.ini (this could be anywhere depending on your installation, mine’s in /usr/local/php5/lib/)
- /var/log/mail.log
firstly, sudo nano -w /etc/hostconfig and add the following line:
MAILSERVER=-YES-
then sudo nano -w /etc/postfix/main.cf, find the myhostname variable (by default it’s host.domain.tld), uncomment it and change it to your domain (if you’re on a machine that doesn’t have a DNS, you can make it a domain that you’re responsible for so that it doesn’t get shut down at the receiving end, but please don’t make it google.com or something like that!)
now, open php.ini and look for the sendmail_path variable, uncomment it, make its value sendmail -t -i, save then restart apache. I’m not really sure if this is 100% necessary as there’s a comment above that says this is the default value anyway, but it can’t hurt!
now open a terminal window and execute the next couple of commands:
My machine already had postfix running for some reason. That might have been because I had been playing with sendmail for the hour or so before I found the tutorial above. For that reason I had to "restart sendmail" in order to get it to read in the new main.cf configuration. So I did the following:
sudo postfix reload
however that made postfix report the following warning:
postfix/postfix-script: warning: not set-gid or not owner+group+world executable: /usr/sbin/postdrop
So after a quick google search I found that this can be fixed by running the following command:
sudo chmod g+s /usr/sbin/postdrop
Then you can:
sudo postfix stop
sudo postfix start
and then for good measure restart apache:
sudo apachectl restart
Finally, double-check it's all working by finishing the tutorial:
% sudo postfix start
% tail -f /var/log/mail.log
finally, create a file called mail.php (or whatever!) and add the following to it:
<?php
mail(
    'you@yourdomain.com',            // your email address
    'Test',                          // email subject
    'This is an email',              // email body
    "From: Mern <me@mydomain.com>"   // additional headers
);
?>
Obviously replace you@yourdomain.com with your email address and me@mydomain.com with a valid email address (a valid domain at least, as some mail servers will bounce your email if the sender's domain isn't real). Now navigate to your mail.php file (likely http://localhost/mail.php) and watch your terminal window to see that it's been sent successfully.
Thursday, March 5, 2009
Using Yum (common yum tasks)
yum update
yum search any-package
yum search httpd
Querying package information. To view the information for an individual package:
yum info any-package
yum info httpd
yum install any-package
yum install gkrellm
yum remove any-package
yum remove gkrellm
yum list available|less
yum list installed|less
yum list updates|less
Yum caches package headers and RPM packages under /var/cache/yum/. The downloaded RPMs in particular can take up a lot of space, so it is a good idea to delete them once they are no longer useful, and to do the same with old package headers that are no longer in the database. To perform the corresponding cleanup, run:
yum clean all
yum groupinstall "groupname"
I hope this will help you understand how to use yum more efficiently. I did this for our newbies that may want to uninstall packages, which is not mentioned in the fedora FAQ. For more info on yum go here: http://www.fedorafaq.org/#installsoftware
MORE:
One tip: you can also use wildcards such as * or ?, e.g.
yum install gkrellm*
And you have to be root to install/remove, but not for searching!
To search in package names only, use yum list. This differs from search in that it's much faster, as it will search package names only, while yum search will search all the package info, including package description.
yum list something
yum list mozilla
yum provides filename
yum provides /usr/bin/mozilla
To get a list of packages updated/added to any of your repositories recently:
yum list recent
yum --enablerepo=reponame install packagename
yum --enablerepo=dag install j2re
yum grouplist
yum groupinstall "groupname"
yum groupinstall "GNOME Desktop Environment"
yum groupupdate "GNOME Desktop Environment"
And remember folks, you can always use -y to say yes to everything, and -C to use the cache only.
get the latest version of postgres (8.3) on your centos box
http://it.toolbox.com/blogs/web2-place/if-your-yum-is-not-fetching-latest-postgres-25582
Sunday, February 22, 2009
handy clipboard utility in os x
moving forward and backward a "word" in terminal
http://blog.macromates.com/2006/word-movement-in-terminal/
here are the commands (I edited them to match Leopard's setup)
- Open Terminal
- Open the Inspector (command-i) (the inspector in Leopard does not have the keyboard section, so instead just open the regular preferences)
- Go to the “Keyboard” section
- Add a new key binding by pressing the “Add” button
- Set “Key:” to “cursor left”
- Set “Modifier:” to “option”
- Set “Action” to “send string to shell:”
- In the text box, press the escape key to get the "\033" text, then hit the "b" key, for "back" (In Leopard the text box automatically escapes your \ key, so you will need to copy and paste \033b into the text box rather than type it)
- Click “OK”
- Repeat this process for forward movement, using “cursor right” for the “Key:” setting, and “escape-f” for the forward key binding (Again you will need to copy and paste \033f in the text window due to the fact that leopard terminal automatically escapes your text if you type it into the box)
- Be sure to click “Use Settings as Defaults” if you want the change to be permanent (No need to do this in leopard since you should be in the preferences window anyway at this point and your settings will be default)
- Open a new terminal window, scroll back in your buffer or type a multi-word command, and try it out by holding option and pressing the left and right arrow keys!
Wednesday, February 11, 2009
Symlinks in windows: junction
http://technet.microsoft.com/en-us/sysinternals/bb896768.aspx
Thursday, February 5, 2009
Tuesday, February 3, 2009
Friday, January 30, 2009
Turn on/off startup of services with chkconfig
I especially hate webmin:
chkconfig --level 4 webmin off
a brief understanding of runlevels: (from: http://www.yolinux.com/TUTORIALS/LinuxTutorialInitProcess.html)
Runlevel "3" will boot to text or console mode and "5" will boot to the graphical login mode ( "4" for slackware)
Runlevel | Scripts Directory (Red Hat/Fedora Core) | State
0 | /etc/rc.d/rc0.d/ | Shutdown/halt system
1 | /etc/rc.d/rc1.d/ | Single user mode
2 | /etc/rc.d/rc2.d/ | Multiuser with no network services exported
3 | /etc/rc.d/rc3.d/ | Default text/console only start. Full multiuser
4 | /etc/rc.d/rc4.d/ | Reserved for local use. Also X-windows (Slackware/BSD)
5 | /etc/rc.d/rc5.d/ | XDM X-windows GUI mode (Red Hat/System V)
6 | /etc/rc.d/rc6.d/ | Reboot
s or S | - | Single user/Maintenance mode (Slackware)
M | - | Multiuser mode (Slackware)
One may switch init levels by issuing the init command with the appropriate runlevel. Use the command "init #" where # is one of s,S,0,1,3,5,6. The command telinit does the same.
The scripts for a given run level are run during boot and shutdown. The scripts are found in the directory /etc/rc.d/rc#.d/ where the symbol # represents the run level. i.e. the run level "3" will run all the scripts in the directory /etc/rc.d/rc3.d/ which start with the letter "S" during system boot. This starts the background processes required by the system. During shutdown all scripts in the directory which begin with the letter "K" will be executed. This system provides an orderly way to bring the system to different states for production and maintenance modes.
If you installed all daemons (background processes), Linux will run them all. To avoid slowing down your machine, remove unneeded services from the start-up procedure. You can start/stop individual daemons by changing to the directory:
- /etc/rc.d/init.d/ (Red Hat/Fedora )
- /etc/init.d/ (S.u.s.e.)
- /etc/init.d/ (Ubuntu / Debian)
- cd /etc/rc.d/init.d/ (or /etc/init.d/ for S.u.s.e. and Ubuntu / Debian)
- ./httpd stop
Use the command ps aux to view all processes on your machine.
TIP: List state and run level of all services which can be started by init: chkconfig --list
or
service --status-all | grep running (Red Hat/Fedora Core based systems)
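For completeness, the typical chkconfig incantation to silence a service across all the multiuser runlevels and then verify the result. webmin here is just the service I happen to hate; any init script name works:

```shell
# turn the service off in runlevels 2 through 5, then list its state
chkconfig --level 2345 webmin off
chkconfig --list webmin
```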
Monday, January 26, 2009
alias mysql start and stop in bash for quick and easy startup
drop the following in ~/.profile
alias mysqlstart='sudo mysqld_safe5 &'
alias mysqlstop='mysqladmin5 -u root -p shutdown'
then type
>source ~/.profile
>sudo -v
>password:
>mysqlstart