William Jiang

JavaScript, PHP, Node, Perl, LAMP Web Developer – http://williamjxj.com; https://github.com/williamjxj?tab=repositories

Category Archives: Database

CouchDB vs. Redis


I installed CouchDB and ZeroMQ on my MacBook Air using Xcode 5 and Homebrew (brew).
The following articles were helpful:

CouchDB, the anti-Redis? A feature comparison of Redis and CouchDB (in Chinese)
Is CouchDB The Anti-Redis?
A comparison of Cassandra, MongoDB, CouchDB, Redis, Riak and HBase (in Chinese)

CouchDB's strength is exactly Redis's weakness: storing large volumes of data that rarely change but are frequently queried. Redis's strength is exactly CouchDB's weakness: storing small amounts of frequently changing data.

Roughly, the browser-side counterparts map to the server side like this:

IndexedDB -> CouchDB
localStorage -> Redis
Web Workers -> node promises/delay

MySQL: update column by using regexp

1. MySQL, update column by using regexp

I want to update the values of a MySQL column, e.g. from:
http://www.example.com/news/ent//news/2002/09/02/ent-12.html?page=1223
to:
http://www.example.com/news/2002/09/02/ent-12.html?page=1223
For a huge table, if you use:

  1. $ mysqldump -u user -p database table > dump.sql
  2. $ find /path/to/dump.sql -type f -exec sed -i 's/old_string/new_string/g' {} \;
  3. $ mysqlimport -u user -p database table < dump.sql

It would make the processing complex and error-prone. My solution is very simple: just use MySQL's substring() and concat(); they work very well, taking around 10 seconds to process tens of thousands of rows.

update contents 
 set url=concat('http://www.example.com', substring(url,39)) 
  where createdby='gossip.cgi'
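
Before running the UPDATE on a big table, it can be worth previewing the rewrite with a SELECT first; a quick sketch using the same table, offset and WHERE clause as above:

select url,
       concat('http://www.example.com', substring(url,39)) as new_url
 from contents
 where createdby='gossip.cgi'
 limit 10;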

2. MySQL merge 2 tables

I have 2 MySQL tables which have the same structure/indexes/unique index, each with tens of thousands of rows, and I want to merge table2 into table1. The following is my way:

insert ignore into table1
  select * from table2;

I recall Oracle PL/SQL can do this kind of merge as well, although there it is usually written with the MERGE statement rather than INSERT IGNORE.
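
If, instead of keeping table1's existing rows, you wanted table2's values to win on a key collision, a variant of the same idea (a sketch only, assuming a passwd column just for illustration) would be:

insert into table1
  select * from table2
  on duplicate key update passwd = values(passwd);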

MySQL: a column with a unique index as well as a normal index


In MySQL, can a table field own more than one index? Yes, in some cases we can do that to increase performance.

I have a table `keywords`, and want to add 2 different indexes on its keyword field: one is for uniqueness (no duplicated keyword), the other is for speeding up lookups (in the case of input-field auto-complete). Like this:

alter table `keywords` add unique index (`keyword`);
ALTER TABLE  `keywords` ADD INDEX (`keyword`);

This table is used for search-form auto-complete. Originally it had only the unique index, and it was slow when the user typed a keyword. After adding the index on `keyword` for the speed-up purpose, the query it runs, select keyword from keywords where keyword like '%'.$_GET['k'].'%', became much faster indeed when I checked the result. So adding a normal index besides the unique index works; the conclusion is that with or without the normal index the behaviour is quite different.
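
Written out as plain SQL (a sketch; the ? placeholder stands for the user's input, and the LIMIT is just typical auto-complete trimming, not from the post):

select keyword
 from keywords
 where keyword like concat('%', ?, '%')
 limit 10;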

It may seem somewhat confusing to employ 2 indexes on the same field. However, it works in the real world: without the second, normal index on `keyword`, it is pretty slow.

Having several indexes on one field can be very useful in some cases; essentially, they index for different purposes.

Install MongoDB with Windows XAMPP


I found a useful article, How to install MongoDB to use with XAMPP on Windows, when I was struggling to find the correct version of php_mongo.dll.

Step 1
Installing MongoDB

  1. Download MongoDB from http://www.mongodb.org/downloads
  2. Extract the archive
  3. Open Command Prompt
  4. Go to the mongodb\bin folder
  5. Run the following to create the DB directory and start the daemon:
mkdir C:\mongodb\db
mongod.exe --dbpath=C:\mongodb\db
Step 2

Check your PHP version by going to http://localhost/xampp/phpinfo.php and note down two things:
first the PHP version (here 5.3), then the PHP extension build (here TS, VC6).

  1. Download MongoDB driver from github.com/mongodb/mongo-php-driver/downloads
  2. Extract the driver, copy the DLL that matches the PHP version installed with your XAMPP (here mongo-1.1.4-php5.3vc6ts), and put it in the php/ext folder
  3. Open the php.ini file
  4. Append the line extension=php_mongo.dll, save php.ini, and then restart your web server.
If everything went fine:
  1. Point your browser at localhost and select your language, then phpinfo() from the left menu
  2. Make sure there’s a Mongo section
The key is the 'PHP Extension Build: API20090626,TS,VC6' entry in localhost/xampp/phpinfo.php; the 'TS,VC6' part is what tells you which php_mongo.dll build to download, and other builds will not load.

Step 3

Try out the example from http://www.php.net/manual/en/mongo.tutorial.php to test that MongoDB is working.
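
For reference, a minimal connection test along the lines of that tutorial, assuming the legacy Mongo class provided by php_mongo.dll (class and method names can differ between driver versions):

// connect to the default localhost:27017 and pick a test database/collection
$m = new Mongo();
$collection = $m->selectDB('testdb')->selectCollection('testcol');

// insert a document and read it back
$collection->insert(array('title' => 'hello mongodb', 'ts' => new MongoDate()));
$doc = $collection->findOne(array('title' => 'hello mongodb'));
print_r($doc);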

MySQL: ERROR 1062 (23000): Duplicate entry '' for key 1


I got the above error in a bash script when loading MySQL data, and found the following solution to avoid the issue:

If you get “Skipped records” using “LOAD DATA LOCAL INFILE” copy the data file to the actual database server and do the load without the “LOCAL” keyword.
This will then stop when an error occurs, 9 times out of 10 it will be index issues and you will know why there are skipped records.

e.g. LOAD DATA LOCAL INFILE 'myinfile.txt';
Query OK, 288168 rows affected (1 min 44.49 sec)
Records: 494522 Deleted: 0 Skipped: 206354 Warnings: 0

LOAD DATA INFILE '/data/input/myinfile.txt';
Query OK, 252243 rows affected (0.02 sec)
ERROR 1062 (23000): Duplicate entry '5935009001-2008-08-03 04:19:18' for key 1

So, to let the load continue and simply skip duplicate rows, keep the LOCAL keyword with LOAD DATA INFILE; drop it when you want the load to stop on the offending row so you can see why records are being skipped.

PHP and MySQL tips

print array or scalar?

My version of printing an array or a scalar to the screen looks like this:

function prints($vars) {
 global $config;
 // print only when debugging is switched on in the global config
 if (!isset($config['debug']) || !$config['debug']) return;
 if (is_array($vars) || is_object($vars)) {
  // arrays and objects: pretty-print inside <pre> tags
  echo "<pre>"; print_r($vars); echo "</pre>";
 } else {
  // scalars: echo followed by an HTML line break
  echo $vars."<br>\n";
 }
}
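
For example (assuming the same global $config used above):

$config['debug'] = true;   // nothing is printed when this flag is off
prints($_POST);            // array: dumped with print_r() inside <pre>
prints("step 2 done");     // scalar: echoed with a trailing <br>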

Dynamic load Class
class_exists — Checks if the class has been defined

if (!class_exists("someClass", false)) {
 require_once "someClass.php";
}
or:
if (class_exists('MyClass')) {
    $myclass = new MyClass();
}
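
A related option, not from the original snippet, is to register an autoloader once so classes are pulled in on first use; a minimal sketch assuming a one-class-per-file naming scheme:

spl_autoload_register(function ($class) {
    $file = $class . '.php';
    if (is_file($file)) {
        require_once $file;
    }
});

$obj = new MyClass();   // MyClass.php is loaded automatically if not yet defined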

MySQL: Date functions

If you’re looking for an SQL query that returns the number of days, hours and minutes between date1 and now:

SELECT CONCAT(DAYOFYEAR(date1)-DAYOFYEAR(NOW()),' days ',
DATE_FORMAT(ADDTIME("2000-00-00 00:00:00",
SEC_TO_TIME(TIME_TO_SEC(date1)-TIME_TO_SEC(NOW()))),'%k hours and %i minutes'))
 AS time FROM time_table;
+---------------------------------+
| time                            |
+---------------------------------+
| 27 days 2 hours and 52 minutes  |
+---------------------------------+

The following is from http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_last-day.

(1) I’m using this query for a birthday-reminder:

SELECT `geb_Geboorte`
FROM `gebruikers`
WHERE
DAYOFYEAR( curdate()) <= dayofyear( `geb_Geboorte` )
AND
DAYOFYEAR( curdate())+15 >= dayofyear( `geb_Geboorte` );

(2) Just another example on how to figure out how many days are until some birthdate (in order to do a range query, or get the “next” birthday):

SELECT name, birthday,
IF(DAYOFYEAR(birthday) >= DAYOFYEAR(NOW()),
DAYOFYEAR(birthday) - DAYOFYEAR(NOW()),
DAYOFYEAR(birthday) - DAYOFYEAR(NOW()) +
DAYOFYEAR(CONCAT(YEAR(NOW()),'-12-31')))
AS distance
FROM birthdates;

The + DAYOFYEAR(CONCAT(YEAR(NOW()),'-12-31')) (which is 366 or 365, depending on whether we're in a leap year or not) takes care of the New Year's Eve wrap around.

(3) This is another query for the birthday reminder:

SELECT * FROM `users`
WHERE 
( 
  DAYOFYEAR( NOW() ) > DAYOFYEAR( DATE_SUB(birthdate,INTERVAL 7 DAY) ) 
 AND
  DAYOFYEAR( NOW() ) <= DAYOFYEAR( DATE_SUB(birthdate,INTERVAL 7 DAY) )+7 
)
OR
( 
  DAYOFYEAR( NOW() ) > DAYOFYEAR( birthdate )-7 
 AND 
  DAYOFYEAR( NOW() ) <= DAYOFYEAR( birthdate ) 
);

3 tips: mysqldump --routines, unique PHP $_POST keys, MySQL group by

1. mysqldump --routines

When I used mysqldump to back up a database, the MySQL stored procedures and functions were lost by default. I spent much time debugging, since the web pages couldn't run normally without them.
The following article from the web is very helpful; it fixes the problem by including them in the backup.

Dumping MySQL Stored Procedures, Functions and Triggers

By default mysqldump backs up all the triggers but NOT the stored procedures/functions. There are 2 mysqldump parameters that control this behavior:

  • --routines – FALSE by default
  • --triggers – TRUE by default

This means that if you want an existing backup script to also include the triggers and stored procedures, you only need to add the --routines command line parameter:

mysqldump <other mysqldump options> --routines > outputfile.sql

Let's assume we want to back up ONLY the stored procedures and triggers and not the MySQL tables and data (this can be useful for importing them into another db/server that already has the data but not the stored procedures and/or triggers); then we should run something like:

mysqldump --routines --no-create-info --no-data --no-create-db --skip-opt <database> > outputfile.sql

and this will save only the procedures/functions/triggers. If you need to import them to another db/server you will have to run something like:

mysql <database> < outputfile.sql

2. unique $_POST keys

Sometimes when we post a form, the posted keys might be duplicated, such as in a re-enter-password form. Before updating/inserting into the DB, we need to make them unique and extract the useful info. The following function is suitable for such a case:

function array_unique_key($array) {
 $result = array();
 foreach (array_unique(array_keys($array)) as $tvalue) {
  $result[$tvalue] = $array[$tvalue];
 }
 return $result;
}

It does 3 things:

  • use array_keys() to get the $_POST keys as an array.
  • use array_unique() to remove the duplicate keys.
  • generate a new array which holds the unique keys and their values.
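
Typical usage before building the insert/update statement (a sketch):

$fields = array_unique_key($_POST);
// $fields now holds each distinct key exactly once, with its value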

3. MySQL: get unique max record with same ids

I want to extract the latest updated record from the table, which may be updated many times for the same item. E.g., if there are 6 records (for the same item) updated during this week, I prefer to fetch only the latest record instead of all of them. The following is the simplest way:


SELECT data.* FROM data INNER JOIN (SELECT MAX(id) AS id FROM data group by url) ids ON data.id = ids.id 

GROUP BY groups the records (here by url), and MAX(id) picks the latest one in each group.

MySQL: 4 tips

(1) update 1 table with another table's values:
Instead of using a cursor, MySQL has a convenient way to update a table (row by row) from another table's values. The following SQL replaces table t's passwd field with table h's passwd field value.

update users_prod as t, (select * from users_org) as h set
t.passwd=h.passwd
where t.id=h.id
and t.passwd is null

This is an efficient way: the (select ...) as alias fetches the data into memory, and the update then loops over that in-memory set.

(2) load data with pre-processing:
The following is a more powerful LOAD DATA with a pre-processing step, transforming columns while loading.

mysql> LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE t
IGNORE 1 LINES
(@date,@time,@name,@weight_lb,@state)
SET
 dt = CONCAT(@date,' ',@time),
 firstname = SUBSTRING_INDEX(@name,' ',1),
 lastname = SUBSTRING_INDEX(@name,' ',-1),
 weight_kg = @weight_lb * .454,
 st_abbrev = (select abbrev from states where name = @state);

(3) create view:
CREATE VIEW can save a lot of code and time when optimizing an application. The following are 2 examples: one is from Paul DuBois's book, one from http://dev.mysql.com/doc/refman/5.0/en/create-view.html:

 CREATE VIEW view_mail AS
  SELECT
  DATE_FORMAT(t, '%M %e, %Y') AS date_sent,
  CONCAT(`from`, '@', from_host) AS sender,
  CONCAT(`to`, '@', to_host) AS recipient, size
  FROM mail;
--
create function book_subject() returns varchar(64)
return @book_subject;
--
create view thematical_books as
select title, author
from books
where subject = book_subject();

MySQL: stored procedure and function

While writing MySQL functions, I found a helpful article on the web and extracted it here for reference.

I always find it amusing that when people talk about database logic they are very focused on stored procedures, almost as if nothing else exists. It makes me wonder if these people have ever worked with modern databases. Stored procedures are one of the oldest methods of encapsulating database logic, but they are not the only method available. Many relational databases nowadays have views, constraints, referential integrity with cascading update and delete, stored functions, triggers and the like. These are extremely powerful tools when used appropriately.

In the next couple of sections we'll cover stored procedures and these other kinds of database objects and detail the strengths and weaknesses of each for encapsulating logic. We will give a rating of 0-5 for each feature: 0 meaning the feature is non-existent, 5 meaning this object is one of the best-suited objects for implementing this kind of task.

Stored Procedures

Stored procedures are one of numerous mechanisms of encapsulating database logic in the database. They are similar to regular programming language procedures in that they take arguments, do something, and sometimes return results and sometimes even change the values of the arguments they take when arguments are declared as output parameters. You will find that they are very similar to stored functions in that they can return data; however stored procedures can not be used in queries. Since stored procedures have the mechanism of taking arguments declared as OUTPUT they can in theory return more than one output.
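
As a concrete illustration (a minimal MySQL sketch with made-up table and parameter names, not taken from the article):

delimiter //
create procedure count_orders (in p_customer int, out p_total int)
begin
  -- the OUT parameter carries the result back to the caller
  select count(*) into p_total from orders where customer_id = p_customer;
end //
delimiter ;

call count_orders(42, @total);
select @total;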

Feature ratings:

  • Works in various kinds of databases: 3 (many databases such as DB II, Oracle, SQL Server, MySQL 5, PostgreSQL and FireBird support them; there are also a lot that don't, e.g. MySQL < 5.0 and MS Access, although parameterized queries serve a similar role).
  • Can be called by multiple applications and interfaces: 4 (generally they can be called, but the use of OUTPUT arguments is not always usable).
  • Can take an undefined number of arguments: 2 (most databases allow you to define optional arguments, but this can become very unwieldy to maintain if there are a lot, because you end up duplicating logic even within the stored procedure, so it is generally avoided).
  • Reusability within the database: 3 (you can not reuse them in views, and rarely in stored functions and other stored procedures, unless the caller does not require a return value or result query; this varies slightly from DBMS to DBMS).
  • Can be used to change data in a table without giving rights to a user to change the table directly: 4 (in general true for most DBMSs that support them).
  • Can return a varying number of fields given different arguments: 3 (again, in theory it can, but it is very hard to maintain, since you would often be duplicating logic to return one field in one situation and another set of fields in another, or to update a field only when it is passed in as an argument. Note that in many databases, such as SQL Server and Oracle, one can return multiple result sets with a stored procedure, but the receiving end needs to be able to do a next-result-set call and know the sequence in which the result sets are being sent).
  • Long stretches of SQL easy to read and maintain: 5 (one of the great strengths of stored procedures is that you can have long transactions of SQL statements and conditional loops which can all be committed at once or rolled back as a unit; this also saves on network traffic).

Stored Functions

Stored Functions are very similar to stored procedures except in 3 major ways (a small example follows the list):

  1. Unlike stored procedures, they can be used in views, stored procedures, and other stored functions.
  2. In many databases they are prohibited from changing data or have DDL/DML limitations. Note that for databases such as PostgreSQL this is not true, since the line between a stored function and a stored procedure is very grey.
  3. They generally can not take output arguments (placeholders) that are then passed back out with changed values.
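
For instance, a minimal MySQL stored function (made-up names; unlike a procedure, it can be called inside an ordinary query):

delimiter //
create function order_total (p_order int) returns decimal(10,2)
reads sql data
begin
  return (select sum(qty * unit_price) from order_details where order_id = p_order);
end //
delimiter ;

select id, order_total(id) from orders;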
Feature ratings:

  • Works in various kinds of databases: 3 (many databases such as DB II, Oracle, SQL Server, MySQL 5 and PostgreSQL support them; there are also a lot that don't, e.g. MySQL < 5.0, MS Access).
  • Can be called by multiple applications and interfaces: 4 (generally they can be called, but the use of OUTPUT arguments is not always usable).
  • Can take an undefined number of arguments: 2 (most databases allow you to define optional arguments, but this can become very unwieldy to maintain if there are a lot, because you end up duplicating logic even within the stored function, so it is generally avoided).
  • Reusability within the database: 5 (you can reuse them in views, in other stored functions and in stored procedures; this varies slightly from DBMS to DBMS).
  • Can be used to change data in a table without giving rights to a user to change the table directly: 3 (many databases do not allow changing of data in stored functions except temp table data, but those that do in general support this).
  • Can return a varying number of fields given different arguments: 4 (for databases such as SQL Server, PostgreSQL, DB2 and Oracle that allow returning tables and sets, you can selectively pick the fields you want from within a query; so although the function always outputs the same number of fields, you can selectively use only some, similar to what you can do with views. This is not true for scalar functions (MySQL 5.1 and below only supports scalar functions)).
  • Long stretches of SQL easy to read: 5 (you can do fairly intensive multi-line processing which in the end returns one value or table to the user).

Views

Views are one of the greatest things invented since sliced bread.
The main beauty of a view is that it can be used like a table in most situations, but unlike a table, it can encapsulate very complex calculations and commonly used joins. It can also use pretty much any object in the db except for stored procedures. Views are most useful when you always need to join the same set of tables say an Order with an Order Detail to get summary calculation fields etc.
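
For instance, the Order / Order Detail summary mentioned above might be captured like this (a sketch with made-up table and column names):

create view order_summary as
select o.id, o.customer_id, o.order_date,
       count(*) as line_items,
       sum(d.qty * d.unit_price) as order_total
from orders o
join order_details d on d.order_id = o.id
group by o.id, o.customer_id, o.order_date;

-- it can then be queried like any table:
select * from order_summary where order_total > 100;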

Feature ratings:

  • Works in various kinds of databases: 4 (many databases such as DB II, Oracle, SQL Server, MySQL 5, PostgreSQL, SQLite and MS Access (which calls it a query) support them; there are also some that don't, e.g. MySQL < 5.0).
  • Can be called by multiple applications and interfaces: 5 (generally they can be called anywhere you can use a table, which is pretty much everywhere).
  • Can take an undefined number of arguments: 5 (you can select subsets of columns and rows from a view just like you can from a table).
  • Reusability within the database: 5 (you can reuse them in other views, in stored functions and in stored procedures).
  • Can be used to change data in a table without giving rights to a user to change the table directly: 3 (in many databases views are read-only and complex views are rarely updateable. Note that some databases such as Oracle, PostgreSQL, MS SQL Server and SQLite will let you update even a complex view by using an INSTEAD OF trigger or a rule against the view. MySQL 5, MS SQL Server and some others automatically make one-table views updateable. MS Access has a fairly sophisticated update algorithm for queries: it automatically makes one-table and even multi-table queries updateable, allows deletes if you define primary keys appropriately, and, depending on which table a column comes from, it will update that table and even automatically create child records in child tables if you try to update a child column when there is no record).
  • Can return a varying number of fields given different arguments: 4 (you can select subsets of columns and rows from a view just like you can from a table; however you can't change the underlying structure, e.g. return records from a different set of tables like you can with a stored procedure or function).
  • Long stretches of SQL easy to read: 3 (a view can often be defined with an administrative designer or a color-coded SQL editor, so it is fairly easy to read; in general, though, it is limited to a single select statement or unioned select statements).

Triggers And Rules

Triggers are objects, generally tied to a table or view, that run code on certain events such as inserting, updating or deleting data, either before or after the event happens.

Triggers can be very great things and very dangerous things. Dangerous in the sense that they are tricky to debug, but powerful because no update to a table with a trigger can easily escape the trigger.

They are useful for making sure certain things always happen when data is inserted or updated – e.g. setting complex default values of fields, or inserting logging records into other tables.
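
For example, the "insert a logging record on every change" case might look like this in MySQL (made-up audit table; a sketch only):

create trigger users_after_update
after update on users
for each row
  insert into users_audit (user_id, old_email, new_email, changed_at)
  values (old.id, old.email, new.email, now());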

Triggers are especially useful for one particular situation and that is for implementing instead of logic. For example as we said earlier, many views involving more than one table are not updateable. However in DBMS such as PostgreSQL, you can define a rule on a view that occurs when someone tries to update or insert into the view and will occur instead of the insert. The rule can be fairly complex and can layout how the tables should be updated in such a situation. MS SQL Server and SQLite let you do something similar with INSTEAD OF triggers.
Note the term Rule is a little confusing in DBMS because they mean quite different things. In Microsoft SQL Server for example a Rule is an obsolete construct that was used to define constraints on tables. In PostgreSQL a Rule is very similar to a trigger except that it does not get triggered per row event and is defined without need of a handling function.

Feature ratings:

  • Works in various kinds of databases: 2 (many databases such as DB II, Oracle, SQL Server, MySQL 5 and PostgreSQL support them; there are lots that don't, e.g. MySQL < 5.0, MySQL 5 only in a limited way, MS Access).
  • Can be called by multiple applications and interfaces: 5 (it just happens behind the scenes; no application can escape them).
  • Can take an undefined number of arguments: 0 (strictly for updating data, and tied to a table or view).
  • Reusability within the database: 0 (no).
  • Can be used to change data in a table without giving rights to a user to change the table directly: 4 (in general yes, for databases that support them).
  • Can return a varying number of fields given different arguments: 0 (triggers are strictly for updating data).
  • Long stretches of SQL easy to read: 5 (a trigger can often be defined with an administrative designer or a color-coded SQL editor, so it is fairly easy to read).

Foreign Key Constraints, Primary Key Constraints, Referential Integrity, Cascade Update/Delete

No true database should be without Constraints, Referential Integrity, and Cascade Update/Delete. You can define them with SQL DDL statements or using a relational designer. The use of these tools is limited, but the purpose they serve can not easily be replicated with other database objects.
These tools serve 2 main purposes (a small DDL example follows the list).

  • Provide a declarative model of the database and how the data is related to each other. A self-documenting tool.
  • Ensure that you don't mistakenly do something stupid in your coding to violate your model. If your model is sound, errors in your code signal a flaw in your programming logic. If you get errors in your programming logic, then verify that your model is sound.
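
In DDL terms, a minimal example of declaring the model and its cascade rules (made-up schema, assuming a customers table with an indexed id column already exists):

create table orders (
  id          int primary key,
  customer_id int not null,
  foreign key (customer_id) references customers (id)
    on update cascade
    on delete cascade
) engine=InnoDB;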
Feature ratings:

  • Works in various kinds of databases: 3 (many databases such as DB II, Oracle, SQL Server, MySQL 4+ (4 is limited, 3 very limited, and it varies depending on the storage engine, InnoDB vs MyISAM), PostgreSQL and MS Access support them; there are lots that don't, e.g. MySQL < 5.0).
  • Can be called by multiple applications and interfaces: 5 (it just happens behind the scenes; no application can escape them).
  • Can take an undefined number of arguments: 0 (they take no arguments).
  • Reusability within the database: 5 (yes; no matter where your updates and inserts occur, they can not escape the iron fist of the key constraints and cascade update/delete rules, short of disabling them).
  • Can be used to change data in a table without giving rights to a user to change the table directly: 4 (really only applies to cascade update/delete rules, which are a special kind of trigger, so to speak, that kick in when a referential integrity constraint would otherwise be violated. For example, if you update a key field, a cascade update on the foreign keys will force an update on the foreign key field to correct the situation, so you don't end up with orphan data).
  • Can return a varying number of fields given different arguments: 0 (not relevant).
  • Long stretches of SQL easy to read: 0 (not relevant).

Dynamically Generated SQL

In this section we discuss the pros and cons of using dynamically generated SQL vs. only using stored procedures, views, etc.

Feature ratings:

  • Works in various kinds of databases: 4 (guaranteed to work for any database, although the syntax may vary from database to database).
  • Can be called by multiple applications and interfaces: 3 (it will only work in the application where the dynamic SQL is defined, or if the application is wrapped as a shared library, and the use of a shared library is often limited as to where you can use it. However, your logic can be applied to multiple databases without needing to alter the db by adding stored procs etc., if you are always using the same app to access these databases).
  • Can take an undefined number of arguments: 5 (you can select subsets of columns and rows from tables, change the tables you are reading from, etc.).
  • Reusability within the database: 2 (non-existent except in some very rare situations and databases that allow you to load shared libraries).
  • Can be used to change data in a table without giving rights to a user to change the table directly: 4 (you can have the application control the security, so in that sense only the application needs to have rights).
  • Can return a varying number of fields given different arguments: 5 (you can select subsets of columns and rows and do complex joins; you can change the underlying structure, e.g. return records from a different set of tables).
  • Long stretches of SQL easy to read: 2 (the SQL often sits interspersed with other programming logic, which makes it somewhat difficult to read depending on the application language, and also difficult to write complex transactional SQL).

What about Security?

In the above criteria, we didn’t put Security in as a feature/benefit even though many people will argue that dynamic SQL and the like are not secure.
This actually varies depending on how you implement dynamic SQL: if you are using an O/R mapper or built-in data adapters (supported in .NET), prepared statements, or are very careful to sanitize the inputs you use in your SQL, then
your dynamic SQL will probably be just as secure as, or sometimes more secure than, stored procedures, since some people are of the mindset that stored procedures are a magic bullet for protecting against SQL injection and other attacks (such as script injection) regardless of how they are implemented and called, and thus neglect other necessary precautions.
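
As a concrete illustration of the prepared-statement point (a PHP/PDO sketch with made-up credentials, not from the original article):

$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'dbuser', 'dbpass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// the user input is bound as a parameter, never concatenated into the SQL string
$stmt = $pdo->prepare('SELECT keyword FROM keywords WHERE keyword LIKE ?');
$stmt->execute(array('%' . $_GET['k'] . '%'));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);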

Bash: MySQL backup enhanced version

I got Bill Hernandez's script for MySQL database backup using bash + mysqldump. It is an enhanced version: it backs up all the tables in a database in a single command, and is suitable to put into crontab.

I changed the original script, e.g., removing the sudo and chmod privilege steps to make it easier to run. It works fine in a Linux environment. The following is the updated script:

#!/bin/bash
# http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
# ( 1 ) Backs up all info to time stamped individual directories, which makes it easier to track
# ( 2 ) Now maintains a single log that contains additional information
# ( 3 ) Includes a file comment header inside each compressed file
# ( 4 ) Used more variables instead of hard-code to make routine easier to use for something else
#
# Posted by Ryan Haynes on July 11 2007 6:29pm

# DO NOT DELETE AUTOMATICALLY FOR NOW, MAYBE LATER

DELETE_EXPIRED_AUTOMATICALLY="TRUE"
expire_minutes=$(( 60 * 24 * 7 )) # 7 days old

if [ $expire_minutes -gt 1440 ]; then
    expire_days=$(( $expire_minutes /1440 ))
else
    expire_days=0
fi

function pause(){
 read -p "$*"
}

mysql_username="test"
mysql_password="test"
current_dir=`pwd`
echo -n "Current working directory is : "
echo $current_dir
echo "--------"

TIME_1=`date +%s`
TS=$(date +%Y.%m.%d\-%I.%M.%p)

BASE_DIR=./mysql
BACKUP_DIR=${BASE_DIR}/$TS
BACKUP_LOG_NAME=mysql_dump_runtime.log
BACKUP_LOG=${BASE_DIR}/${BACKUP_LOG_NAME}

mkdir -p $BACKUP_DIR
chown demo:demo $BACKUP_DIR
chmod 755 $BASE_DIR
chmod -R 755 $BACKUP_DIR

cd $BACKUP_DIR
echo -n "Changed working directory to : "
pwd

echo "Saving the following backups..."
echo "-------"

DBS="$(mysql --user=${mysql_username} --password=${mysql_password} -Bse 'show databases')"
for db in ${DBS[@]}
do
    normal_output_filename=${db}.sql
    compressed_output_filename=${normal_output_filename}.bz2
    echo $compressed_output_filename
    
    echo "-- $compressed_output_filename - $TS" > $normal_output_filename
    echo "-- Logname : `logname`" >> $normal_output_filename
    # mysqldump --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R | bzip2 -c > $compressed_output_filename
    mysqldump --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R >> $normal_output_filename
    bzip2 -c $normal_output_filename > $compressed_output_filename
    rm $normal_output_filename
done
echo "------"

TIME_2=`date +%s`

elapsed_seconds=$(( ( $TIME_2 - $TIME_1 ) ))
elapsed_minutes=$(( ( $TIME_2 - $TIME_1 ) / 60 ))

cd $BASE_DIR
echo -n "Changed working directory to : "
pwd
echo "Making log entries..."

if [ ! -f $BACKUP_LOG ]; then
    echo "----------" > ${BACKUP_LOG_NAME}
    echo "THIS IS A LOG OF THE MYSQL DUMPS..." >> ${BACKUP_LOG_NAME}
    echo "DATE STARTED : [${TS}]" >> ${BACKUP_LOG_NAME}
    echo "----------" >> ${BACKUP_LOG_NAME}
    echo "[BACKUP DIRECTORY ] [ELAPSED TIME]" >> ${BACKUP_LOG_NAME}
    echo "----------" >> ${BACKUP_LOG_NAME}
fi
    echo "[${TS}] This mysql dump ran for a total of $elapsed_seconds seconds." >> ${BACKUP_LOG_NAME}
    echo "---------" >> ${BACKUP_LOG_NAME}

# delete old databases. I have it setup on a daily cron so anything older than 60 minutes is fine
if [ $DELETE_EXPIRED_AUTOMATICALLY == "TRUE" ]; then
    counter=0
    for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
    do
        counter=$(( counter + 1 ))
        echo "[${TS}] [Expired Backup - Deleted] $del" >> ${BACKUP_LOG_NAME}
    done
    echo "--------"
    if [ $counter -lt 1 ]; then
        if [ $expire_days -gt 0 ]; then
            echo There were no backup directories that were more than ${expire_days} days old:
        else
            echo There were no backup directories that were more than ${expire_minutes} minutes old:
        fi
    else
        echo "----------" >> ${BACKUP_LOG_NAME}
        if [ $expire_days -gt 0 ]; then
            echo These directories are more than ${expire_days} days old and they are being removed:
        else
            echo These directories are more than ${expire_minutes} minutes old and they are being removed:
        fi
        echo "--------"
        echo "\${expire_minutes} = ${expire_minutes} minutes"
        counter=0
        for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
        do
        counter=$(( counter + 1 ))
           echo $del
           rm -R $del
        done
    fi
fi
echo "-------"
cd `echo $current_dir`
echo -n "Restored working directory to : "
pwd