William Jiang

JavaScript, PHP, Node, Perl, LAMP Web Developer – http://williamjxj.com; https://github.com/williamjxj?tab=repositories


Linux: add Users and others

Create User Accounts for Web Development

In Linux-based web development, team members sometimes share a single user account; for example, 5 developers use 1 account and share the code. This is NOT a good choice: the potential conflicts from overwriting, deleting, and renaming files can ruin a lot of hard work.

To solve this problem, the best way is to create an individual user account for each team member.
Each member then has his own space to code, test, and maintain his source code. Combined with a version control system such as git or cvs, the development environment is solid and scalable.

The following are the steps to create 2 Linux user accounts that share a common group (note that useradd -p expects an already-encrypted password, so it is safer to set the passwords with passwd afterwards):

# groupadd -g 10005 web_group
# useradd -d /home/test1 -m -s /bin/bash -g web_group test1
# useradd -d /home/test2 -m -s /bin/bash -g web_group test2
# passwd test1
# passwd test2
# usermod -a -G apache,mysql,nobody test1
# usermod -a -G apache,mysql,nobody test2

The users ‘test1’ and ‘test2’ each get their own home directory and share the primary group ‘web_group’. We also add 3 supplementary groups besides ‘web_group’: ‘apache’ for the web server, ‘mysql’ for database control, and ‘nobody’, which may be the user the web server runs as.
Now the 2 users share the benefits of the ‘apache’, ‘mysql’, ‘nobody’ and ‘web_group’ groups.
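To verify the memberships after the commands above, something like the following should list web_group plus the supplementary groups:

$ id test1
$ groups test2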

The following commands grant and reassign privileges, etc.

# recursively set directories to user/group read/write/executable,
# and others without write permission
$ find . -type d -exec chmod 775 {} \;

# change directory owner and group
$ sudo chown -R test1:web_group web_testing_dirs/

Where should web programs save files?

It is NEVER a good idea to store data (uploaded files, temporary web files) under a web developer’s home directory: it causes permission problems, is unsafe, and is hard to maintain. Use a public shared directory instead.

Here are the potential directories to hold temporary/intermediate files on the server side:

  • /tmp
  • /usr/tmp/
  • $DocumentRoot/$somedir/

Which is the best place? I use /usr/tmp/, which can be accessed even by the ‘nobody’ user (the directory’s permission mode is 1777, which by default lets every user share it).
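As a sketch (the directory name myapp_uploads is hypothetical), a shared upload area under /usr/tmp/ could be prepared like this:

$ ls -ld /usr/tmp                             # should show the sticky bit: drwxrwxrwt (mode 1777)
# mkdir /usr/tmp/myapp_uploads                # hypothetical app-specific upload directory
# chown nobody:web_group /usr/tmp/myapp_uploads
# chmod 2775 /usr/tmp/myapp_uploads           # group-writable; setgid keeps new files in web_group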
Some references for Apache and MySQL

  • for mysql.sock, it is in /var/lib/mysql/
  • for apache’s php session, it is in /var/lib/php/session/

These directories can be assigned for MySQL and Apache/PHP when compiling on Linux (configure, make, make install), or set in their configuration files.
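A quick way to confirm where these are configured (the file locations below are typical Red Hat defaults and may differ on your system):

$ grep -i '^socket' /etc/my.cnf               # e.g. socket=/var/lib/mysql/mysql.sock
$ grep -i 'session.save_path' /etc/php.ini    # e.g. session.save_path = "/var/lib/php/session"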

Alias for web application

To access local applications through shortcut URLs such as /test1 and /test2, we need to configure Apache’s httpd.conf:

$ vi httpd.conf

Alias /test1 "C:/Users/.../test1/"
Alias /test2 "C:/Users/.../test2/"

# Grant access to each aliased directory (repeat the block for test2);
# keep the global <Directory /> block restrictive and enable options only where needed.
<Directory "C:/Users/.../test1/">
    Options FollowSymLinks Indexes MultiViews Includes ExecCGI
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

After adding these lines, run httpd -t to check the configuration file syntax.
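A possible workflow on a Unix-style install (assuming apachectl is available) is to check the syntax and reload only if it passes:

$ httpd -t && apachectl graceful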

jQuery: put html tag in first place

    
$('<span></span>').addClass('error-message').
  text(errorMessage).appendTo(target);

Always put the HTML tag in first place (with its closing tag); this keeps it compatible with IE browsers.

PHP: header() and exit()

if (some_condition_meets) {
    header("Location: otherpage.php");
    exit;
}
// other PHP code ...

exit() should immediately follow header(). This is very necessary: header() only sets the redirect header to be sent, and PHP keeps executing the rest of the script unless exit() is called, so code after the redirect could still run or leak output.

Javascript Compactor and Decompactor

It is always a good idea to compact (minify) the JavaScript code used in HTML, especially when the JS file is big. My way to do this is:

  • edit a JS file, let’s say, sample.js
  • test and debug it in the development environment (step over errors, set breakpoints)
  • compact sample.js to sample.min.js
  • so there are 2 versions: sample.js (readable) and sample.min.js (whitespace removed)
  • in the production environment, replace sample.js with sample.min.js: use <script language="javascript" type="text/javascript" src="sample.min.js"></script>
  • to make further changes, use a decompactor (which restores the whitespace/formatting), edit and improve the code in the development environment, then compact it again for production.

In the production environment, always use the compacted JS to improve performance.
Currently there are some free JS compactors/decompactors; the following are 2:
MeanFreePath JavaScript Code Compactor
JavaScript Code Decompressor
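As a command-line alternative to the online tools above (assuming the UglifyJS minifier is installed via npm; the filenames follow the example above):

$ npm install -g uglify-js
$ uglifyjs sample.js -o sample.min.js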


Linux: .bash_profile vs. .bashrc

Here I summarize 2 basic Linux files that initialize a user’s command-line environment: .bash_profile and .bashrc (assuming the default shell is /bin/bash).

.bash_profile and .bashrc are hidden files in each user’s $HOME/ directory. To check them:

$ cd $HOME
$ ls -la .bash_profile .bashrc

Together with $HOME/.exrc and other hidden files in the $HOME directory (names starting with .), they build the user’s basic command-line environment: path, aliases, terminal attributes, initial scripts, etc. For example, I use ‘set -o vi’ (in .bashrc) or ‘export EDITOR=vi’ (in .bash_profile) for quick command-line editing, and a lot of ‘alias’ definitions to simplify operations.

Distinguishing the 2 files is easy:

  • .bash_profile is executed automatically when a user logs in.
    When we log in (type username and password) on the console or via ssh, .bash_profile is executed to configure the command-line environment and set up the path before the initial command prompt.
  • After login, when a new interactive terminal is opened, .bashrc is executed automatically.
    If we have already logged into Linux and open a new terminal window (xterm) inside Gnome or KDE, or connect via a terminal such as a vt220, then .bashrc is executed before that window’s command prompt. (A quick hands-on check follows this list.)

    Note that a script whose first line is a shebang, like the one below, runs in a non-interactive shell, which by default does not read .bashrc; it only sources the file named by the BASH_ENV variable, if that is set.

    #! /bin/bash
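
A quick hands-on check (a sketch: ssh localhost assumes sshd is running, and on distributions where .bash_profile already sources .bashrc the login shell will print both lines):

$ echo 'echo ".bash_profile ran"' >> ~/.bash_profile
$ echo 'echo ".bashrc ran"' >> ~/.bashrc
$ ssh localhost        # login shell: prints ".bash_profile ran"
$ bash                 # new interactive (non-login) shell: prints ".bashrc ran"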

Why two different files?

Say, you’d like to print some lengthy diagnostic information about your machine each time you login (load average, memory usage, current users, etc). You only want to see it on login, so you only want to place this in your .bash_profile. If you put it in your .bashrc, you’d see it every time you open a new terminal window.

Most of the time you don’t want to maintain two separate config files for login and non-login shells — when you set a PATH, you want it to apply to both. You can fix this by sourcing .bashrc from your .bash_profile file, then putting PATH and common settings in .bashrc.
To do this, add the following lines to .bash_profile:

if [ -f ~/.bashrc ]; then
   . ~/.bashrc
fi

Linux system performance: 10 tips

While developing web applications on LAMP, it is always helpful to maintain and monitor the server side: not only the web server and database server, but the Linux system itself. The first five tips below are extracted from the web article “Five Linux performance commands every admin should know”; I add 5 points of my own.

1. top
The first stop for many system administrators, the top command shows the current tasks being serviced by the kernel as well as some broad statistical data about the state of your host. By default, the top command automatically updates this data every five seconds (this update period is configurable).

The top command is also incredibly fully featured (although hardly anyone uses half the features available). The keystroke you should start with is h, for “Help” (the man page is also excellent). The help quickly shows that you can add and subtract fields from the display as well as change the sort order. You can also kill or renice particular processes using k and r respectively.

The top command shows the current uptime, the system load, the number of processes, memory usage and those processes using the most CPU (including a variety of pieces of information about each process such as the running user and the command being executed).
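For example (the -d, -b/-n and -u options are standard procps top switches; ‘apache’ is just an illustrative user name):

$ top -d 2                  # refresh every 2 seconds
$ top -b -n 1 > top.txt     # one batch-mode snapshot, handy for logging
$ top -u apache             # show only one user's processes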

2. vmstat
The vmstat command gives you a snapshot of current CPU, I/O, process, and memory usage. Like the top command it updates dynamically and can be executed like:
$ vmstat 10
where the delay is the time between updates in seconds, here 10 seconds. The vmstat command keeps writing results until terminated with Ctrl-C (or you can specify a count on the command line when vmstat is executed). People sometimes pipe this continuous output into files for performance trending, but we’ll see better ways of doing that later in this tip.

The first columns show processes: the r column is processes waiting for run time and the b column is processes in uninterruptible sleep. If a number of processes are waiting here, you’ve probably got a performance bottleneck somewhere. The second group of columns shows memory: virtual, free, buffer and cache memory. The third group is swap and shows the amount of memory swapped in from and out to disk. The fourth group is I/O and shows blocks received from and sent to block devices.

The last two groups show system- and CPU-related information. The system columns show the number of interrupts and context switches per second. The CPU columns are particularly useful: each entry shows a percentage of CPU time. The entries are:

us: The time spent running user tasks/code.
sy: The time spent running kernel or system code.
id: Idle time
wa: The time spent waiting for IO
st: Time stolen from a virtual machine.

vmstat is good for seeing patterns in CPU usage, but remember that each entry depends on the chosen delay, and that short-term CPU monitoring may tell you little about actual CPU problems. You need long-term trending (see below) to get true insight into CPU performance.
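For example, to capture a finite set of samples for later trending (10-second interval, 360 samples, i.e. one hour), something like:

$ vmstat 10 360 >> vmstat.$(date +%Y%m%d).log &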

3. iostat
The next command we’re going to look at is iostat. The iostat command (provided via the sysstat package on Ubuntu and Red Hat/Fedora) provides three reports: CPU utilization, device utilization, and network file system utilization. If you run the command without options it displays the first two; you can select individual reports with the -c, -d and -n switches respectively.

The first report, CPU utilization, breaks the average CPU usage into categories by percentage: user processes, system processes, I/O wait and idle time.

The second report, device utilization, shows each device attached to the host along with useful information such as transfers per second (tps) and block reads and writes, and lets you identify devices with performance issues. You can specify the -k or -m switch to display the statistics in kilobytes or megabytes respectively rather than blocks, which can be easier to read and understand in some instances.

The last report shows similar information to the device utilization report, but for network-mounted filesystems rather than directly attached devices.
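For example (standard sysstat iostat switches):

$ iostat -d -k 5        # device report in kilobytes, refreshed every 5 seconds
$ iostat -c 5 3         # three CPU utilization samples, 5 seconds apart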

4. free
The next command, free, shows memory statistics for both main memory and swap.

You can display a total line by specifying the -t switch, and you can display the amounts in bytes with the -b switch or in megabytes with the -m switch (it displays kilobytes by default).

Free can also be run continuously using the -s switch with a delay specified in seconds:

$ free -s 5

This would refresh the free command’s output every 5 seconds.

5. sar
Like the other tools we’ve looked at, sar is a command-line tool for collecting, viewing and recording performance data. It’s considerably more sophisticated than the tools we’ve looked at previously and can collect and display data over longer periods. It is installed on Red Hat and Ubuntu via the sysstat package. Let’s start by running sar without any options and examining the output:

$ sar

This shows CPU statistics for every 10 minutes of the day plus a final average. The data is drawn from a daily statistics file that is collected and rolled every 24 hours (the files are stored in the directory /var/log/sa/ and named saxx, where xx is the day they were collected). Also collected are statistics on memory, devices, network, and a variety of other metrics (for example, use the -b switch to see block device statistics, -n with a keyword such as DEV to see network data, and -r to see memory utilization). You can also specify the -A switch to see all collected data.

You can also run sar and output data to another file for longer periods of collection. To do this we specify the -o switch and a filename, the interval between samples (remembering that gathering data has a performance impact too, so make sure the interval isn’t too short) and the count – how many intervals to record. If you omit the count then sar collects continuously, for example:

$ sar -A -o /var/log/sar/sar.log 600 >/dev/null 2>&1 &

Here we’re collecting all data (-A), logging to the /var/log/sar/sar.log file, collecting every 600 seconds (ten minutes) continuously, and then backgrounding the process. If we then want to display this data back we can use the sar command with the -f switch like so:

$ sar -A -f /var/log/sar/sar.log

This will display all the data collected whilst the sar job was running. You can also take and graph sar data using tools like ksar and sar2rrd.

This is a very basic introduction to sar. There is a lot of data available from sar, and it can be a powerful way to review the performance of your hosts. I recommend reviewing sar’s man page for further details of the metrics sar can collect.

6. ps
$ ps -ef
shows the processes of all users in detail.
$ ps -ef | grep mysqld
checks whether the MySQL database server (which listens on port 3306 by default) is running.
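Another helpful variation (procps ps supports --sort):

$ ps aux --sort=-%mem | head     # processes using the most memory first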

7. netstat
Displays protocol statistics and current TCP/IP network connections.
$ netstat -rn
displays the routing table, with addresses and port numbers in numerical form.
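Another common use (these flags are from the classic net-tools netstat; -p needs root to show every process):

$ netstat -tlnp     # listening TCP sockets with the owning program/PID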

8. df
df reports file system disk space usage, such as:
$ df -k
Sometimes growing log files or core dump files eat huge amounts of disk space and drag system performance down dramatically; use this command to see how much space is available or used.
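For example:

$ df -h             # human-readable sizes for all mounted filesystems
$ df -k /var        # check the partition that usually holds log files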

9. du
du estimates file space usage, e.g.
$ du -ks
summarizes the current directory’s usage with a block size of 1K.
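To see which subdirectories are the biggest:

$ du -ks * | sort -rn | head     # largest entries in the current directory first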

10. tcpdump
tcpdump is a common packet analyzer that runs under the command line. It allows the user to intercept and display TCP/IP and other packets being transmitted or received over a network to which the computer is attached.
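For example (eth0 is an assumed interface name; tcpdump normally needs root):

# tcpdump -i eth0 -nn port 80     # show HTTP traffic without resolving names or ports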