Thursday, February 17, 2011

Zimbra authentication preauth in Ruby


In one of our projects, we have integrated Zimbra Collaboration Server (ZCS). It's an email and calendar server plus much more; think of it as a next-generation Microsoft Exchange server. In addition to email and calendar, it provides document storage and editing, instant messaging, and simplified administrative controls, all in an award-winning webmail user interface built with the latest AJAX web technology. ZCS also provides mobility and syncs to desktop client applications; the server is deployed on commodity Linux and Mac server hardware.

Previously it was part of Yahoo; now it is part of VMware.

Zimbra provides two modes of authentication for single sign-on:
1. URL-based
2. SOAP-based

The concept behind both is the same: generate a pre-authentication token, called a preauth.

As we have integrated it into a Ruby on Rails application, I wrote code to generate the preauth.
The following is the link to it.

http://wiki.zimbra.com/wiki/Preauth#Sample_Ruby_code_for_computing_the_preauth_value
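For a quick sanity check of the value your code produces, the preauth can also be computed from the shell with openssl: it is an HMAC-SHA1 over `account|by|expires|timestamp`, keyed with the domain's preauth key. The key and hostnames below are placeholders; generate a real key with `zmprov generateDomainPreAuthKey yourdomain.com`.

```shell
# Compute a Zimbra preauth token (sketch; key and hostnames are placeholders).
ACCOUNT="user@example.com"
KEY="0123456789abcdef0123456789abcdef"   # from: zmprov generateDomainPreAuthKey example.com
TIMESTAMP=$(($(date +%s) * 1000))        # Zimbra expects milliseconds since the epoch
EXPIRES=0                                # 0 = use the domain's default expiry
PREAUTH=$(printf '%s|name|%s|%s' "$ACCOUNT" "$EXPIRES" "$TIMESTAMP" \
  | openssl dgst -sha1 -hmac "$KEY" | awk '{ print $NF }')
echo "https://mail.example.com/service/preauth?account=$ACCOUNT&by=name&timestamp=$TIMESTAMP&expires=$EXPIRES&preauth=$PREAUTH"
```

Redirecting the user to the printed URL logs them straight into the webmail interface while the token is valid.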

Wednesday, February 16, 2011

Uninstall all the gems


Many times a Rails developer runs into a situation where they want to uninstall all installed gems.
Here we can take advantage of piping the output of one command into another.
gem cleanup
This command will remove older versions of installed gems, keeping only the latest version of each.
gem list --no-version | xargs gem uninstall
Here we are using a pipeline: the first command lists all gems without version numbers, and the second command consumes that list for uninstalling.

Extra :

If you need to exclude some gems from being uninstalled, use the following command,
gem list --no-version | grep -v "rake" | xargs rvmsudo gem uninstall
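Since the pipeline is destructive, it can be worth previewing what would be removed first. This dry-run variant (the exclusion list is just an example) prints the uninstall commands instead of executing them:

```shell
# Preview which gems the pipeline would uninstall, without touching anything.
# grep -vE excludes the listed gems; drop the `echo` to actually run the commands.
gem list --no-version | grep -vE '^(rake|bundler)$' | xargs -n1 echo gem uninstall
```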

Sunday, February 13, 2011

Monitor Your Rails/Passenger App with Munin

This article is originally copied from Steve Schwartz's blog post.

I have only modified it to cover accessing the graphs through nginx as a subdirectory.

Munin is a great tool to monitor resources on your server, showing graphs over time, so that you can analyze resource trends to find what’s killing your server before it causes major problems. It is also very configurable and can be made to profile and graph just about anything via plugins. And with a couple tricks, you can get it to monitor your Phusion Passenger application with ease.
Update: If you have RVM installed on your server and need Munin to work with RVM's Passenger installation, follow all of the instructions below, and then make the changes described in Monitor Passenger with Munin when using RVM (http://tech.tjscreations.com/web/monitor-passenger-with-munin-when-using-rvm/).
These changes will still use your server’s non-RVM-installed default Ruby to run the Munin plugin, which will in turn use your RVM-installed Ruby to run the passenger-status and passenger-memory-stats commands.
If you already have Munin installed and working, and just want to know how to get the Passenger Plugins working, you can skip directly to Install Munin Passenger Plugins, Configure Munin for Passenger Stats, and be sure to read the Gotcha.

Install Munin and Munin-node

The first step is to install Munin. If your server is running Ubuntu, this is pretty easy. Once you've SSH'd into your server, enter:
sudo apt-get install munin munin-node -y
If you’re running another flavor of Linux, see the Munin’s Linux install instructions. For Mac OSX, see Mac install instructions.

Install Munin Passenger Plugins

The next step is to install the Passenger plugins. The first is passenger_status:
wget http://gist.github.com/20319.txt
Modify the passenger-status path in 20319.txt to match the location of passenger-status on your system.
Tip: Use sudo find / -name passenger-status
sudo mv 20319.txt /usr/share/munin/plugins/passenger_status
sudo chmod a+x /usr/share/munin/plugins/passenger_status
sudo ln -s /usr/share/munin/plugins/passenger_status /etc/munin/plugins/passenger_status
The second plugin is passenger_memory_stats:
wget http://gist.github.com/21391.txt
Modify the passenger-memory-stats path in 21391.txt to match the location of passenger-memory-stats on your system.
Tip: Use sudo find / -name passenger_memory_stats
sudo mv 21391.txt /usr/share/munin/plugins/passenger_memory_stats
sudo chmod a+x /usr/share/munin/plugins/passenger_memory_stats
sudo ln -s /usr/share/munin/plugins/passenger_memory_stats /etc/munin/plugins/passenger_memory_stats
Now go ahead and restart Munin-node (this is the process that runs Munin at regular intervals):
sudo /etc/init.d/munin-node restart

Configure Munin

Now here's where those couple of tricks come in to get Munin playing nicely with your Rails application. First, we want to tell Munin where to store the HTML and graph images that you can access through the browser.
sudo nano /etc/munin/munin.conf
And change the following lines:
htmldir /path/to/your/rails/public/munin
[yoursite.com]
Note that the htmldir can really be any directory, but make sure it’s persistent (i.e. if you’re using Capistrano to keep revisions of your app on the server, make sure the munin directory is in the shared directory outside of your rails app root directory).
Also note that the [yoursite.com] part is only a descriptive name, so it really doesn’t matter what you call it. If you have a more complex application that runs from multiple directories or multiple servers, you can group Munin stats, and in this case, you actually have to put some thought into this line. But that’s outside the scope of this article, so you can read up on customizing Munin Master on your own time if you’d like.
Now if you haven’t already, you need to actually create that directory you just told Munin about:
cd /path/to/your/rails/public
mkdir -p munin
sudo chown munin:munin munin

Configure Munin For Passenger Stats

And finally, you need to allow the munin user to run the passenger-status and passenger-memory-stats commands without a password, since they both require sudo powers to run properly.
sudo visudo
And at the bottom of the file add:
munin   ALL=(ALL) NOPASSWD:/usr/bin/passenger-status, /usr/bin/passenger-memory-stats

Gotcha

At this point, Munin is supposed to start doing its stuff and all is happy in the ruby-munin marriage. For me, however, this was not the case. After combing the Munin error logs and digging through the Munin documentation and code more than I care to admit, I realized that Munin-node needs to preface the passenger stat commands with ruby. So, here's how we fix that:
sudo nano /etc/munin/plugin-conf.d/munin-node
And then add this to the bottom of that file:
[passenger_*]
user munin
command ruby %c

Final Munin Restart

Now we’ll give Munin-node one more restart and we’re in business.
sudo /etc/init.d/munin-node restart
After waiting a few minutes, you should start to see .html files and graphs and whatnot in the /path/to/your/rails/shared/directory/munin directory. If not, you want to check out the Munin-node error logs to see what’s going on. On Ubuntu, this is found at /var/log/munin/munin-node.log.

Access Munin

http(s)://your-rails-app/munin/index.html 

Access munin as a part of a rails application using nginx

You can access munin as part of your rails application using nginx's location directive.
Add the following lines to the server block of your rails application in nginx:
location /munin {
  root path_to_the_folder_containing_your_munin_folder;  # nginx appends the /munin URI to root
  index index.html index.htm;
}

Reference

http://www.alfajango.com/blog/how-to-monitor-your-railspassenger-app-with-munin#install-munin-passenger-plugins

REE garbage collector performance tuning


Ruby’s garbage collector tries to adapt memory usage to the amount of memory used by the program by dynamically growing or shrinking the allocated heap as it sees fit. For long running server applications, this approach isn’t always the most efficient one. The performance very much depends on the ratio heap_size / program_size. It behaves somewhat erratic: adding code can actually make your program run faster.
With REE, one can tune the garbage collector’s behavior for better server performance. It is possible to specify the initial heap size to start with. The heap size will never drop below the initial size. By carefully selecting the initial heap size one can decrease startup time and increase throughput of server applications.
Garbage collector behavior is controlled through the following environment variables. These environment variables must be set prior to invoking the Ruby interpreter.
  • RUBY_HEAP_MIN_SLOTS
    This specifies the initial number of heap slots. The default is 10000.
  • RUBY_HEAP_SLOTS_INCREMENT
    The number of additional heap slots to allocate when Ruby needs to allocate new heap slots for the first time. The default is 10000.
    For example, suppose that the default GC settings are in effect, and 10000 Ruby objects exist on the heap (= 10000 used heap slots). When the program creates another object, Ruby will allocate a new heap with 10000 heap slots in it. There are now 20000 heap slots in total, of which 10001 are used and 9999 are unused.
  • RUBY_HEAP_SLOTS_GROWTH_FACTOR
    Multiplier used for calculating the number of new heap slots to allocate the next time Ruby needs new heap slots. The default is 1.8.
    Take the program in the last example. Suppose that the program creates 10000 more objects. Upon creating the 10000th object, Ruby needs to allocate another heap. This heap will have 10000 * 1.8 = 18000 heap slots. There are now 20000 + 18000 = 38000 heap slots in total, of which 20001 are used and 17999 are unused.
    The next time Ruby needs to allocate a new heap, that heap will have 18000 * 1.8 = 32400 heap slots.
  • RUBY_GC_MALLOC_LIMIT
    The amount of C data structures which can be allocated without triggering a garbage collection. If this is set too low, then the garbage collector will be started even if there are empty heap slots available. The default value is 8000000.
  • RUBY_HEAP_FREE_MIN
    The number of heap slots that should be available after a garbage collector run. If fewer heap slots are available, then Ruby will allocate a new heap according to the RUBY_HEAP_SLOTS_INCREMENT and RUBY_HEAP_SLOTS_GROWTH_FACTOR parameters. The default value is 4096.
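To make the growth-factor examples above concrete, here is a small awk sketch of the default allocation schedule (initial heap of 10000 slots, increment 10000, growth factor 1.8):

```shell
# Reproduce the worked examples: each new heap adds the previous allocation
# multiplied by RUBY_HEAP_SLOTS_GROWTH_FACTOR (1.8 by default).
awk 'BEGIN {
  slots = 10000; alloc = 10000                 # initial heap, first increment
  for (i = 1; i <= 3; i++) {
    slots += alloc
    printf "allocation %d: +%d slots, %d total\n", i, alloc, slots
    alloc *= 1.8
  }
}'
```

This reproduces the 20000 and 38000 totals from the examples above, and shows that the next heap would add 32400 slots.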

GC configuration with respect to my portal

As mentioned on coffeepowered.net (see reference link), we have used the scrap plugin.
It gave the following statistics:
Number of objects : 2181349 (1735589 AST nodes, 79.56%)
Heap slot size : 40
GC cycles so far : 415
Number of heaps : 9
Total size of objects: 85208.95 KB
Total size of heaps : 119579.90 KB (34370.95 KB = 28.74% unused)
Leading free slots : 389261 (15205.51 KB = 12.72%)
Trailing free slots : 2 (0.08 KB = 0.00%)
Number of contiguous groups of 16 slots: 38143 (19.94%)
Number of terminal objects: 6617 (0.22%)
As you can see in the statistics, we required 9 heaps to fit 2181349 objects. To reduce this so that everything fits into a single heap, we added the following configuration.
1. Create a file with any name, e.g.
sudo vi /opt/ruby-enterprise-1.8.7-2010.02/bin/ruby_with_env
2. Add the following lines to it:
#!/bin/bash
export RUBY_HEAP_MIN_SLOTS=2500000
export RUBY_HEAP_SLOTS_INCREMENT=200000
export RUBY_HEAP_SLOTS_GROWTH_FACTOR=1
export RUBY_GC_MALLOC_LIMIT=40000000
export RUBY_HEAP_FREE_MIN=25000
exec "/opt/ruby-enterprise-1.8.7-2010.02/bin/ruby" "$@" 
3. Give it execute permission:
sudo chmod +x /opt/ruby-enterprise-1.8.7-2010.02/bin/ruby_with_env
4. As we are using Passenger, we modified nginx.conf to use the above wrapper, replacing
passenger_ruby /opt/ruby-enterprise-1.8.7-2010.02/bin/ruby;
with
passenger_ruby /opt/ruby-enterprise-1.8.7-2010.02/bin/ruby_with_env;
After applying this configuration, we got the following statistics:
Number of objects : 2093655 (1698545 AST nodes, 81.13%)
Heap slot size : 40
GC cycles so far : 14
Number of heaps : 1
Total size of objects: 81783.40 KB
Total size of heaps : 97656.29 KB (15872.89 KB = 16.25% unused)
Leading free slots : 406346 (15872.89 KB = 16.25%)
Trailing free slots : 0 (0.00 KB = 0.00%)
Number of contiguous groups of 16 slots: 25396 (16.25%)
Number of terminal objects: 6993 (0.28%)
As you can see, two important numbers have come down drastically:
GC cycles so far : 14
Number of heaps : 1



Reference

http://www.coffeepowered.net/2009/06/13/fine-tuning-your-garbage-collector
http://www.mikeperham.com/2009/05/25/memory-hungry-ruby-daemons/


Monitor rails instances of passenger in Nginx


A month back, we faced an out-of-memory issue that led to a server crash. We solved this issue with the help of the following script, which sends a kill signal to any Rails instance that is consuming too much memory. Without it, the crash would only get noticed on the client side.

This script is useful for monitoring the Rails instances spawned by Passenger, since other available tools are not able to monitor instances that Passenger maintains.

It monitors the Rails instances and kills any instance that is using more than 500MB of memory or has processed more than 200 requests.

After an instance is killed, Passenger will automatically fork another Rails instance if required.

The reason for restarting an instance after a certain number of requests is to keep memory available for the other Rails instances, since I have found articles saying "Rails expands the Ruby process so much that additional memory allocation grows much larger than we actually need, due to the exponential growth factor. And since MRI never gives back unused memory"

I have seen that Passenger offers PassengerMaxRequests and PassengerMaxMemory (not sure about the latter) for the Apache server, but these are not available for nginx.

My script will do the same thing for nginx :).


Create a file,
e.g.
vi monitor_rails_instance
and paste the following code into it.
#!/bin/sh

while true; do

  # Kill any Rails instance that is using more than 500MB of memory.
  passenger-memory-stats | grep Rails:\ /home  | awk ' { if($2 > 500) print "kill -9 " $1}' | bash

  # Abort any Rails instance that has processed more than 200 requests.
  passenger-status | grep Processed:  | awk ' { if($7 > 200) print "kill -6 " $3}' | bash

  sleep 2
done
Then give execute permission to the file
sudo chmod +x monitor_rails_instance
And then run this script as super user
sudo ./monitor_rails_instance
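To see what the awk filters in the script are doing, you can feed one a hand-written sample line (the line below only imitates passenger-memory-stats output; the real columns may differ on your version):

```shell
# Simulated stats line: column 1 is the PID, column 2 the memory usage in MB.
printf '12345  612.3 MB  Rails: /home/app/current\n' \
  | awk '{ if ($2 > 500) print "kill -9 " $1 }'
# Prints "kill -9 12345"; piping that into `bash` is what actually kills the process.
```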

Note:

If you want to run this script as a background process on the server:
sudo nohup ./monitor_rails_instance &

Friday, February 11, 2011

Streaming replication in postgresql 9.1

The following document will tell you how to set up streaming replication for PostgreSQL server 9.1.

We are using it for failover and high availability of the database for a Rails application.

Note : If you need an installation procedure for PostgreSQL server 9.1, please refer to this link.

A. Set up the primary (master) server as per the PostgreSQL server installation document

1. Open postgresql.conf in the data folder and make the following changes:

listen_addresses = '10.50.4.91'  # IP address        

wal_level = hot_standby  # Options are minimal, archive, and hot_standby
#Need to study the other options and their implications

max_wal_senders = 5

wal_keep_segments = 32
# Need to explore
# The following settings were used in versions earlier than 9.0, where replication worked
# via WAL archiving and an archive/restore command.
#archive_mode    = on
#archive_command = 'cp %p /path_to/archive/%f'

2. Open pg_hba.conf and add authentication for self machine and for stand by machine

host    all             all             10.50.4.91/22           trust
host    replication     postgres        10.50.4.58/22           trust
Note : the replication entry signifies that the slave will replicate all databases on the master

B. Set up the standby machine (e.g. here 10.50.4.58) as per the postgres installation document up to step 10.1 (don't do 10.2, as we are copying the data folder from the primary)

C. On the primary, log in as postgres
Start the PostgreSQL server and then do the following:

psql -Upostgres -h10.50.4.91 -c "SELECT pg_start_backup('label', true)"
rsync -a /usr/local/pgsql/data/ 10.50.4.58:/usr/local/pgsql/data/ --exclude postmaster.pid
psql -Upostgres -h10.50.4.91 -c "SELECT pg_stop_backup()"

D. The rsync in step C will create a data folder inside /usr/local/pgsql on the standby

On the standby:
1. Edit postgresql.conf of the standby
a. Change listen_addresses as per the standby
listen_addresses = '10.50.4.58'

b. Comment out the rest of the changes in postgresql.conf (which we made on the primary server)

E. Enable read-only queries on the standby server. But if wal_level is archive on the primary, leave hot_standby unchanged (i.e., off).

hot_standby = on  #In postgresql.conf
   
F. Edit /usr/local/pgsql/data/pg_hba.conf for your connections if required

For a general setup,

modify the last two lines (as the file was copied from the primary)

host    all             all             10.50.4.58/22           trust
#host    replication     postgres        10.50.4.58/22           trust   

G. Add /usr/local/pgsql/data/recovery.conf on standby with following content

standby_mode          = 'on'
primary_conninfo      = 'host=10.50.4.91 port=5432 user=postgres'
trigger_file = '/tmp/trigger'   #Optional: creating this file promotes the standby to primary; needs to be explored further
#restore_command = 'cp /path_to/archive/%f "%p"' #Optional: used with WAL archiving; needs to be explored further

H. Start the postgresql server of standby

Troubleshooting and verification:
You can check the PostgreSQL log for any issues, or to see whether streaming replication is working or not.
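You can also verify replication by querying the servers directly (the hostnames match the example machines above; pg_stat_replication exists from 9.1, pg_is_in_recovery() from 9.0):

```shell
# On the primary: is a WAL sender connected to the standby, and in what state?
psql -Upostgres -h10.50.4.91 -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"

# On the standby: confirm it is running in recovery (read-only) mode.
psql -Upostgres -h10.50.4.58 -c "SELECT pg_is_in_recovery();"
```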

Reference:
http://wiki.postgresql.org/wiki/Streaming_Replication

Primary observations:
1. Even if the standby goes down, it will pick up changes from the primary at startup time.
2. You cannot make any modifications on the standby.
3. On restart of either machine, the PostgreSQL service gets loaded automatically.

Important reference:
    http://wiki.postgresql.org/wiki/Main_Page
    Go to section Database Administration and Maintenance

Postgresql server 9.1 installation from source code


This document will tell you how to install postgresql server 9.1 from source code.

1. Download postgres (note: the link below fetches the 9.0.1 tarball; substitute the version you need)
wget http://ftp9.us.postgresql.org/pub/mirrors/postgresql/source/v9.0.1/postgresql-9.0.1.tar.bz2

2. Extract postgres
tar xjvf postgresql-9.0.1.tar.bz2

3. Install following dependent packages which are needed for installation
sudo apt-get install build-essential libreadline-dev zlib1g-dev 

4. To install, go inside of the extracted folder
./configure
make
sudo make install # This will install postgresql in /usr/local/pgsql 

5. Copy server startup script under services
sudo cp contrib/start-scripts/linux /etc/init.d/postgresql

6. Then make the startup script executable.
sudo chmod 775 /etc/init.d/postgresql  

7. Add the script to the server's startup routine (init) with update-rc.d:
sudo update-rc.d postgresql defaults

8. Add A Postgres User (service account) 
Now, we need to add a postgres user. This user runs the postgresql server. Postgres will not run as root. 
sudo adduser postgres --home /usr/local/pgsql 

9. Add Paths to Binaries and Man Pages 
1. sudo nano /etc/profile.d/postgresql.sh 
add lines

PATH=$PATH:/usr/local/pgsql/bin
export PATH

2. sudo nano /etc/profile.d/pgmanual.sh 
add lines

MANPATH=$MANPATH:/usr/local/pgsql/man
export MANPATH

Note : These configs can also be done in /home/<username>/.bashrc

3. Make above file executable 
sudo chmod 775 /etc/profile.d/postgresql.sh
sudo chmod 775 /etc/profile.d/pgmanual.sh 

10. Create the PostgreSQL Database Cluster 
1. Make a directory to contain the databases 
sudo mkdir /usr/local/pgsql/data
sudo chown -R postgres:postgres /usr/local/pgsql/data 

2. Execute the initdb script 
su postgres
initdb -D /usr/local/pgsql/data 

Note: If the initdb command is not found, use the full path, i.e. '/usr/local/pgsql/bin/initdb'
OR
Load the postgres commands into shell environment using, 

source /etc/profile.d/postgresql.sh 

11. Start and stop the server
sudo /etc/init.d/postgresql start
sudo /etc/init.d/postgresql stop 

Reference: 
For this I have referred to the following link. It actually shows the installation of postgres 8.3.7, but you can adapt it appropriately.
http://www.xtuple.org/InstallingPostgresFromSource

Sunday, February 6, 2011

First blog post

This is my first blog post, where I will tell you about some boring things about me.

  I have three years of technology-specific work experience, and I have learned many things in these past three years. For a long time, I was thinking of writing a blog about my work and learning experience. But I never tried it; instead, I was maintaining Google docs. On this great night, I have decided to start writing a blog. Actually it is late to start, but ..... JUST DO IT YAARRR!!!!

Very soon I will collect my earlier work experience from Google docs. ;-)