William Jiang

JavaScript, PHP, Node, Perl, LAMP Web Developer – http://williamjxj.com; https://github.com/williamjxj?tab=repositories


A powerful .exrc file for vim


The following .exrc file makes my CentOS web development environment powerful and pretty quick. It provides a lot of shortcuts for writing code. Pretty cool: run vi ~/.exrc to see –>

set autoindent
set noautowrite
set shiftwidth=4
set tabstop=4
set wrapmargin=2

abbr _href <a href=""></a>
abbr _img <img src="" border="0" width="" height="" alt="[image]" align="top">

ab get_jquery <script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js"></script>
ab get_bcss <link href="http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/css/bootstrap-combined.min.css" rel="stylesheet">
ab get_bjs  <script src="http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"></script>

" Newlines inside a map are literal ^M control characters
" (type Ctrl-V then Enter when editing this file):
map ,h5 o<!DOCTYPE html>^M<html>^M<head>^M<meta charset="utf-8">^M<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">^M<title></title>^M<meta name="description" content="">^M<meta name="viewport" content="width=device-width">^M<script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js"></script>^M<link href="http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/css/bootstrap-combined.min.css" rel="stylesheet">^M<script src="http://netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"></script>^M</head>

" The trailing ^[ is a literal Esc (type Ctrl-V then Esc) to leave insert mode:
map ,b5 o<body>^M<div class="container">^M<div class="row">^M<div class="span4"></div>^M</div>^M</div>^M<script>jQuery(function($){});</script>^M</body>^M</html>^[

abbr, ab, and map are truly handy shortcut settings (ab is just the short form of the abbreviate command, the same as abbr).


bash’s find command examples list


Below I list useful and powerful examples of bash's find command, which make work much easier and quicker:

  1. Find files that are over a gigabyte in size:
    $ find ~/Movies -size +1024M
  2. Find files that are over 1 GB but less than 20 GB in size:
    $ find ~/Movies -size +1024M -size -20480M -print0
  3. Find files that have been modified within the last day:
    $ find ~/Movies -mtime -1
  4. Find files that have been modified within the last 30 minutes:
    $ find ~/Movies -mmin -30
  5. Find .doc files that also start with ‘questionnaire’ (AND)
    $ find . -name '*.doc' -name 'questionnaire*'
  6. List all files beginning with ‘memo’ and owned by Maude (AND)
    $ find . -name 'memo*' -user Maude
  7. Find .doc files that do NOT start with ‘Accounts’ (NOT)
    $ find . -name '*.doc' ! -name 'Accounts*'
  8. Find files named ‘secrets’ in or below the directory /tmp and delete them. Note that this will work incorrectly if there are any filenames containing newlines, single or double quotes, or spaces:
    $ find /tmp -name secrets -type f -print | xargs /bin/rm -f
  9. Find files named ‘secrets’ in or below the directory /tmp and delete them, processing filenames in such a way that file or directory names containing single or double quotes, spaces or newlines are correctly handled. The -name test comes before the -type test in order to avoid having to call stat(2) on every file.
    $ find /tmp -name secrets -type f -print0 | xargs -0 /bin/rm -f
  10. Run ‘myapp’ on every file in or below the current directory. Notice that the braces are enclosed in single quote marks to protect them from interpretation as shell script punctuation. The semicolon is similarly protected by the use of a backslash, though ‘;’ could have been used in that case also.
    $ find . -type f -exec myapp '{}' \;
  11. Traverse the filesystem just once, listing setuid files and directories into /root/suid.txt and large files into /root/big.txt.
    $ find / \( -perm -4000 -fprintf /root/suid.txt '%#m %u %p\n' \) , \
    \( -size +100M -fprintf /root/big.txt '%-10s %p\n' \)
  12. Search for files in your home directory which have been modified in the last twenty-four hours. This command works this way because the time since each file was last modified is divided by 24 hours and any remainder is discarded. That means that to match -mtime 0, a file will have to have a modification in the past which is less than 24 hours ago.
    $ find $HOME -mtime 0
  13. Search for files which have read and write permission for their owner, and group, but which other users can read but not write to (664). Files which meet these criteria but have other permissions bits set (for example if someone can execute the file) will not be matched.
    $ find . -perm 664
  14. Search for files which have read and write permission for their owner and group, and which other users can read, without regard to the presence of any extra permission bits (for example the executable bit). This will match a file which has mode 0777, for example.
    $ find . -perm -664
  15. Search for files which are writable by somebody (their owner, or their group, or anybody else).
    $ find . -perm /222
  16. All three of these commands do the same thing, but the first one uses the octal representation of the file mode, and the other two use the symbolic form. These commands all search for files which are writable by either their owner or their group. The files don’t have to be writable by both the owner and group to be matched; either will do.
    $ find . -perm /220
    $ find . -perm /u+w,g+w
    $ find . -perm /u=w,g=w
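As a practical combination of the tests above, here is a minimal sketch (the path and the 7-day window are just assumptions for illustration) that removes old log files in one pass, using -exec ... + to batch filenames instead of piping to xargs:

  $ find /var/tmp -name '*.log' -type f -mtime +7 -exec rm -f {} +

The + terminator passes many filenames to a single rm invocation, and because no pipe is involved it is safe for names containing spaces or newlines.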

Bash: auto launch some tools when rebooting Ubuntu


In Ubuntu, you can always set up your tools / daemons to start/restart/stop in the background by default in /etc/rc.d/. That means these daemons are available to all login users.
However, sometimes we need some tools/daemons to be available only to a certain user: the user starts them when he logs in, so they are used only by him, not by others. E.g., a tool like WebStorm, or a testing DB like MongoDB used only by the login user 'developer'.

In such a case, we just need to set up user developer's $HOME/.profile file (notice: not the $HOME/.bashrc file). They are different: $HOME/.profile runs only once, when the user logs in, while $HOME/.bashrc runs every time he spawns a terminal. Here I give an example of auto-starting the WebStorm GUI tool when the user logs in,
in $HOME/.profile:

# 1. start webstorm
ps -ef | grep webstorm | grep -v grep >/dev/null 2>&1
if [ $? -ne 0 ]; then
  cd $HOME/WebStorm-117.501/bin/; ./webstorm.sh &
fi

It seems enough: when the user logs in, the script checks whether the WebStorm GUI is running or not; if not, it starts it; else it does nothing.
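If your system ships pgrep (part of the procps package on most distributions), the check can be written more compactly. A minimal sketch, assuming the same WebStorm install path as above:

# same idea with pgrep: -f matches against the full command line
if ! pgrep -f webstorm.sh >/dev/null 2>&1; then
  "$HOME/WebStorm-117.501/bin/webstorm.sh" &
fi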
By the way, you can always add an alias in $HOME/.bashrc. Below I provide a shortcut to start the WebStorm GUI manually by using 'alias',
in $HOME/.bashrc:

alias webstorm='cd $HOME/apps/WebStorm-117.501/bin/; ./webstorm.sh &'
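After reloading the file with source ~/.bashrc, the IDE can then be launched in the background just by typing:

$ webstorm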

bash: pkill = `kill -9` ?

kill -9

In a Linux environment, we usually kill a process by using 'kill -9' or 'pkill'. For example, to kill a 'sleep' process, there are 2 ways:

# 1. pkill
$ pkill sleep
# 2. or kill -9 "sleep's pid"
$ kill -9 `ps -ef|grep sleep|grep -v grep|awk '{print $2}'`

Are they the same? Why ‘kill -9‘?
‘-9’ means using Linux signal ‘SIGKILL’. The following is a list of Linux signal names:

 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGABRT      7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     16) SIGSTKFLT
17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU
25) SIGXFSZ     26) SIGVTALRM   27) SIGPROF     28) SIGWINCH
29) SIGIO       30) SIGPWR      31) SIGSYS      34) SIGRTMIN
35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3  38) SIGRTMIN+4
39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12
47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7  58) SIGRTMAX-6
59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

The 9 is SIGKILL, which means using SIGKILL to kill the given pid. According to the glibc info manual's explanation of SIGKILL:

 -- Macro: int SIGKILL
     The `SIGKILL' signal is used to cause immediate program
     termination.  It cannot be handled or ignored, and is therefore
     always fatal.  It is also not possible to block this signal.

     This signal is usually generated only by explicit request.  Since
     it cannot be handled, you should generate it only as a last
     resort, after first trying a less drastic method such as `C-c' or
     `SIGTERM'.  If a process does not respond to any other termination
     signals, sending it a `SIGKILL' signal will almost always cause it
     to go away.

     In fact, if `SIGKILL' fails to terminate a process, that by itself
     constitutes an operating system bug which you should report.

     The system will generate `SIGKILL' for a process itself under some
     unusual conditions where the program cannot possibly continue to
     run (even to run a signal handler).
   

pkill

pkill will send the specified signal (by default SIGTERM) to each matching process instead of listing them on stdout.
'pkill sleep' uses SIGTERM to terminate the process,
while 'kill -9 pid' uses SIGKILL to kill the process. They are different.

So by default: pkill = kill -15, not kill -9

However, the following are equivalent:

 $ pkill -SIGKILL sleep
 $ kill -9 <sleep's pid>
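To see the difference in practice, here is a minimal sketch (the script name and message are made up for illustration): a script that traps SIGTERM can clean up before exiting, but no handler can ever run for SIGKILL:

#!/bin/bash
# trap_demo.sh - cleans up on SIGTERM; SIGKILL can never be caught
trap 'echo "caught SIGTERM, cleaning up"; exit 0' TERM
while true; do sleep 1; done

Running 'pkill -f trap_demo' prints the message and exits cleanly, while 'pkill -9 -f trap_demo' terminates the script immediately, with no chance to clean up.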

Bash: MySQL backup enhanced version

I got Bill Hernandez's script for MySQL database backup, using bash + mysqldump. It is an enhanced version: it backs up all the tables in a database with a single command, and is suitable to put into crontab.

I changed the original script, e.g., removed the sudo and chmod privilege steps, to make it easier to run. It works fine in a Linux environment. The following is the updated script:

#!/bin/bash
# http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
# ( 1 ) Backs up all info to time stamped individual directories, which makes it easier to track
# ( 2 ) Now maintains a single log that contains additional information
# ( 3 ) Includes a file comment header inside each compressed file
# ( 4 ) Used more variables instead of hard-code to make routine easier to use for something else
#
# Posted by Ryan Haynes on July 11 2007 6:29pm

# DO NOT DELETE AUTOMATICALLY FOR NOW, MAYBE LATER

DELETE_EXPIRED_AUTOMATICALLY="TRUE"
expire_minutes=$(( 60 * 24 * 7 )) # 7 days old

if [ $expire_minutes -gt 1440 ]; then
    expire_days=$(( $expire_minutes /1440 ))
else
    expire_days=0
fi

function pause(){
 read -p "$*"
}

mysql_username="test"
mysql_password="test"
current_dir=`pwd`
echo -n "Current working directory is : "
echo $current_dir
echo "--------"

TIME_1=`date +%s`
TS=$(date +%Y.%m.%d\-%I.%M.%p)

BASE_DIR=./mysql
BACKUP_DIR=${BASE_DIR}/$TS
BACKUP_LOG_NAME=mysql_dump_runtime.log
BACKUP_LOG=${BASE_DIR}/${BACKUP_LOG_NAME}

mkdir -p $BACKUP_DIR
chown demo:demo $BACKUP_DIR
chmod 755 $BASE_DIR
chmod -R 755 $BACKUP_DIR

cd $BACKUP_DIR
echo -n "Changed working directory to : "
pwd

echo "Saving the following backups..."
echo "-------"

DBS="$(mysql --user=${mysql_username} --password=${mysql_password} -Bse 'show databases')"
for db in ${DBS[@]}
do
    normal_output_filename=${db}.sql
    compressed_output_filename=${normal_output_filename}.bz2
    echo $compressed_output_filename
    
    echo "-- $compressed_output_filename - $TS" > $normal_output_filename
    echo "-- Logname : `logname`" >> $normal_output_filename
    # mysqldump --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R | bzip2 -c > $compressed_output_filename
    mysqldump --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R >> $normal_output_filename
    bzip2 -c $normal_output_filename > $compressed_output_filename
    rm $normal_output_filename
done
echo "------"

TIME_2=`date +%s`

elapsed_seconds=$(( ( $TIME_2 - $TIME_1 ) ))
elapsed_minutes=$(( ( $TIME_2 - $TIME_1 ) / 60 ))

cd $BASE_DIR
echo -n "Changed working directory to : "
pwd
echo "Making log entries..."

if [ ! -f $BACKUP_LOG ]; then
    echo "----------" > ${BACKUP_LOG_NAME}
    echo "THIS IS A LOG OF THE MYSQL DUMPS..." >> ${BACKUP_LOG_NAME}
    echo "DATE STARTED : [${TS}]" >> ${BACKUP_LOG_NAME}
    echo "----------" >> ${BACKUP_LOG_NAME}
    echo "[BACKUP DIRECTORY ] [ELAPSED TIME]" >> ${BACKUP_LOG_NAME}
    echo "----------" >> ${BACKUP_LOG_NAME}
fi
    echo "[${TS}] This mysql dump ran for a total of $elapsed_seconds seconds." >> ${BACKUP_LOG_NAME}
    echo "---------" >> ${BACKUP_LOG_NAME}

# delete old databases. I have it setup on a daily cron so anything older than 60 minutes is fine
if [ "$DELETE_EXPIRED_AUTOMATICALLY" = "TRUE" ]; then
    counter=0
    for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
    do
        counter=$(( counter + 1 ))
        echo "[${TS}] [Expired Backup - Deleted] $del" >> ${BACKUP_LOG_NAME}
    done
    echo "--------"
    if [ $counter -lt 1 ]; then
        if [ $expire_days -gt 0 ]; then
            echo There were no backup directories that were more than ${expire_days} days old:
        else
            echo There were no backup directories that were more than ${expire_minutes} minutes old:
        fi
    else
        echo "----------" >> ${BACKUP_LOG_NAME}
        if [ $expire_days -gt 0 ]; then
            echo These directories are more than ${expire_days} days old and they are being removed:
        else
            echo These directories are more than ${expire_minutes} minutes old and they are being removed:
        fi
        echo "--------"
        echo "\${expire_minutes} = ${expire_minutes} minutes"
        counter=0
        for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
        do
        counter=$(( counter + 1 ))
           echo $del
           rm -R $del
        done
    fi
fi
echo "-------"
cd "$current_dir"
echo -n "Restored working directory to : "
pwd
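To run it from cron, an entry along these lines would do (the paths are just assumptions; adjust them to wherever you keep the script and its log):

# run the backup every day at 2:30 AM, appending output to a log
30 2 * * * /home/demo/bin/mysql_backup.sh >> /home/demo/mysql/cron.log 2>&1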

Bash and MySQL: load .CSV files

Here is my snippet shell script example to auto-load .CSV files into MySQL. Automating the processing saves energy and time, and using a 'Here Doc' is a good way to handle such a case.

The script loops over a certain dir, picks up all the .CSV files, and inserts them into the DB. The DB tables are identified by the file names.

#!/bin/bash
# set variables, such as MYSQL, USER, PASS, DB, SRC etc ...
cd ${SRC}
for file in *.csv
do
 if [[ "$file" =~ "re1"  ]]
 then
   TABLE='table1'
 elif [[ "$file" =~ "re2" ]]
 then
   TABLE='table2'
 elif [[ "$file" =~ "re3" ]]
 then 
   TABLE='table3'
 else
   TABLE='table4'
 fi
 echo "processing file [" $file "] to table [" $TABLE "] ...";
$MYSQL -u "${USER}" -p"${PASS}" -h localhost -D ${DB} <<EOF

load data infile '${SRC}/${file}'
 into table ${TABLE}
 fields terminated by ','
 enclosed by '"'
 escaped by '\\\'
 lines terminated by '\n';
 \q
EOF
done

Two things need to be noticed:

  1. Before loading the data file, make sure to grant the privilege to the user:
    grant file on *.* to test@localhost identified by 'test';

  2. 'FIELDS ESCAPED BY' must use '\\\' instead of '\' or '\\' inside the here doc; otherwise the shell script will throw an error.
    Refer to 'http://dev.mysql.com/doc/refman/5.1/en/load-data.html' for more details about 'FIELDS ESCAPED BY'.
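You can check what the here doc actually sends to MySQL with a quick sketch like this (just an illustration of the quoting, not part of the loader):

$ cat <<EOF
escaped by '\\\'
EOF
escaped by '\\'

Inside an unquoted here doc the shell collapses \\ to \, so the three backslashes become two, and MySQL then reads '\\' as the single escape character \.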

Bash: MySQL Database backup

Here is a complete example of a MySQL database backup script (dump out & compress):

#!/bin/bash
# 1. if exists, quit
# 2. if not, do the backup
# 3. compress
# put into a cron job, run weekly.
DB="DATABASE"
USER="test"
PASS='test'
DIR='/home/test/DBs/'

DATE=`date '+%d%h%Y'`
FILE="${DIR}/${DB}${DATE}.sql"
ZFILE="${FILE}.gz"

if [ -f ${ZFILE} ]; then
 echo "$ZFILE already exists."
 exit;
fi

mysqldump -u ${USER} -p"${PASS}" -h localhost ${DB} > ${FILE}
gzip -f $FILE

Putting this script into the crontab makes the backup run automatically, such as:

$ crontab -e
0 0 * * 0 $HOME/bin/this_script.sh
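For completeness, restoring from one of these compressed dumps is the reverse pipeline. A minimal sketch, assuming a hypothetical dump file name produced by the script above:

# decompress the dump and feed it straight back into mysql
$ gunzip -c /home/test/DBs/DATABASE01Jan2014.sql.gz | mysql -u test -p'test' -h localhost DATABASE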

Simplify SQL routines by using shell script

Suppose we frequently run some SQL queries; it can be pretty tiring if we have to key them in every time, especially when they are multiple, long statements. To make it easier, we can use the pma (phpMyAdmin) GUI; however, there is a better way to do it.

In Linux, we can use shell scripts to implement it. The following short script (with the SQL in a 'here doc') simplifies the tiresome routine:

Code: batch_sqls.sh
#!/bin/bash
MYSQL="mysql -u user -ppassword -D database"
if [ $# -ne 1 ]; then
  echo "What date of data do you want to check? like: 2010-11-16, or 2010-10-18."
  exit
fi
date1=$1
$MYSQL <<- __EOT__
select count(distinct email) as "$date1's emails:" from table where date like '$date1%';
select count(distinct email) as "$date1's emails not in @mysite:" from table where date like '$date1%' and ( email != '' and email not like '%@mysite%' );
__EOT__

By using 'batch_sqls.sh', we can do a lot of things like these:

  • $ batch_sqls.sh 2010-11-16
    display the results on standard output – the screen.
  • $ batch_sqls.sh 2010-11-16 >archive_`date +'%Y-%m-%d'`.txt
    keep the results in today's file.
  • $ batch_sqls.sh `date +'%Y-%m-%d'` | grep … | sed … | awk {} | tee somefile
    further parse the results and save the extracted data to a new file.
  • $ batch_sqls.sh `date +'%Y-%m-%d'` | /usr/bin/mail -s subject $mail_list
    send the results by email.
  • $ batch_sqls.sh `date +'%Y-%m-%d'` 1>archive.`date +'%Y-%m-%d'`.txt 2>&1
    put batch_sqls.sh in a cron job to let it run automatically.

These cases are very common in a Linux environment: database queries, backups, logfile processing, file operations, system admin, etc. Using shell scripts for such things can relieve the routine.