For Login shells (subject to the -noprofile option):
On logging in:
If `/etc/profile' exists, then source it.
If `~/.bash_profile' exists, then source it,
else if `~/.bash_login' exists, then source it,
else if `~/.profile' exists, then source it.
On logging out:
If `~/.bash_logout' exists, source it.
For non-login interactive shells (subject to the -norc and -rcfile options):
On starting up:
If `~/.bashrc' exists, then source it.
For non-interactive shells:
On starting up:
If the environment variable `ENV' is non-null, expand the variable and source the file named by the value. If Bash is not started in Posix mode, it looks for `BASH_ENV' before `ENV'.
So, typically, your `~/.bash_profile' contains the line
`if [ -f ~/.bashrc ]; then source ~/.bashrc; fi' after (or before) any login-specific initializations.
If Bash is invoked as `sh', it tries to mimic the behavior of `sh' as closely as possible. For a login shell, it attempts to source only `/etc/profile' and `~/.profile', in that order. The `-noprofile' option may still be used to disable this behavior. A shell invoked as `sh' does not attempt to source any other startup files.
When Bash is started in POSIX mode, as with the `-posix' command line option, it follows the Posix 1003.2 standard for startup files. In this mode, the `ENV' variable is expanded and that file sourced; no other startup files are read.
Saturday, September 15, 2012
Friday, September 14, 2012
Clean setup of Ruby REE on Mountain Lion
http://blog.teamtreehouse.com/installing-ruby-rails-and-mysql-on-os-x-lion
http://ryanbigg.com/2011/06/mac-os-x-ruby-rvm-rails-and-you/
http://coderwall.com/p/fywjrw
xcode - app store
(need to finish the installation after the download completes: open Xcode in Applications, ...)
xcode - go to Preferences -> Downloads -> Command Line Tools
(try $ gcc -v to ensure it works.)
Install the following using the su login:
homebrew - http://mxcl.github.com/homebrew/
gcc (non-LLVM) -
brew tap homebrew/dupes
brew install apple-gcc42
xquartz - apple's X11
http://xquartz.macosforge.org/landing/
git - brew install git
Install the following using the local login:
rvm - $ curl -L https://get.rvm.io | bash -s stable --ruby
CPPFLAGS=-I/opt/X11/include CC=/usr/local/bin/gcc-4.2 rvm install --force ree
Install the following using the su login:
postgresql - brew install postgresql
or
mysql - brew install mysql
Tuesday, September 11, 2012
Ruby - Define module class methods using 'extend self'
module A
extend self
def say_hello
'A say hello'
end
end
module B
extend A
end
A.say_hello # => "A say hello"
B.say_hello # => "A say hello"
Ruby - use include to add both class and instance methods
module B
def self.included(base)
base.extend ClassMethods
end
def say_hello
'instance hello'
end
module ClassMethods
def say_hello
'class hello'
end
end
end
class A
include B
end
A.say_hello # => "class hello"
A.new.say_hello # => "instance hello"
Case study - Hardware for etsy.com
Source: http://codeascraft.etsy.com/2012/08/31/what-hardware-powers-etsy-com/
What Hardware Powers Etsy.com?
Traditionally, discussing hardware configurations when running a large website is something done inside private circles; and normally to discuss how vendor X did something very poorly, and vendor Y’s support sucks.
With the advent of the “cloud”, this has changed slightly. Suddenly people are talking about how big their instances are, and how many of them. And I think this is a great practice to get in to with physical servers in datacenters too. After all, none of this is intended to be some sort of competition; it’s about helping out people in similar situations as us, and broadcasting solutions that others may not know about… pretty much like everything else we post on this blog.
The great folk at 37signals started this trend recently by posting about their hardware configurations after attending Velocity conference… one of the aforementioned places where hardware gossiping will have taken place.
So, in the interest of continuing this trend, here are the classes of machine we use to power over $69.5 million of sales for our sellers in July.
Database Class
As you may already know, we have quite a few pairs of MySQL machines to store our data, and as such we’re relying on them heavily for performance and (a certain amount of) reliability.
For any job that requires an all round performant box, with good storage, good processing power, and a good level of redundancy we utilise HP DL380 servers. These clock in at 2U of rack space, 2x 8 core Intel E5630 CPUs (@ 2.53ghz), 96GB of RAM (for that all important MySQL buffer cache) and 16x 15,000 RPM 146GB hard disks. This gives us the right balance of disk space to store user data, and spindles/RAM to retrieve it quickly enough. The machines have 4x 1gbit ethernet ports, but we only use one.
Why not SSDs?
We’re just starting to test our first round of database class machines with SSDs. Traditionally we’ve had other issues to solve first, such as getting the right balance of amount of user data (e.g. the amount of disk space used on a machine) vs the CPU and memory. However, as you’ll see in our other configs, we have plenty of SSDs throughout the infrastructure, so we certainly are going to give them a good testing for databases too.

A picture of our various types of hardware, with the HP to the left/middle and web/utility boxes on the right
Web/Gearman Worker/Memcache/Utility/Job Class
This is a pretty wide catch-all, but in general we try to favour as few machine classes as possible: for a lot of our tasks, from handling web traffic (Apache/PHP) to any box performing a task where there are many of them and redundancy is solved at the app level, we generally use one type of machine. This way hardware re-use is promoted and machines can change roles quickly and easily. Having said that, there are some slightly different configurations in this category for components that are easy to change, e.g. amount of memory and disks.
We’re pretty much in love with this 2U Supermicro chassis which allows for 4x nodes that share two power supplies and 12 3.5″ disks on the front of the chassis.
A general configuration for these would be 2x 8 core Intel E5620 CPUs (@ 2.40ghz), 12GB-96GB of RAM, and either a 600GB 7200rpm hard disk or an Intel 160GB SSD.
Note the lack of RAID on these configurations; we’re pretty heavily reliant on Cobbler and Chef, which means rebuilding a system from scratch takes just 10 minutes. In our view, why power two drives when our datacenter staff can replace the drive and rebuild the machine and have it back in production in under 20 minutes? Obviously this only works where it is appropriate: clusters of machines where the data on each individual machine is not important. Web servers, for example, have no important data since logs are sent constantly to our centralised logging host, and the web code is easily deployed back on to the machine.
We have Nagios checks to let us know when the filesystem becomes un-writeable (and SMART checks also), so we know when a machine needs a new disk.
Each machine has 2x 1gbit ethernet ports; in this case we’re only using one.
Hadoop
In the last 12 months we’ve been working on building up our Hadoop cluster, and after evaluating a few hardware configurations ended up with a very similar chassis design to the one used above. However, we’re using a chassis with 24x 2.5″ disk slots on the front, instead of the 12x 3.5″ design used above.
Each node (with 4 in a 2U chassis) has 2x 12 core Intel E5646 CPUs (@ 2.40ghz), 96GB of RAM, and 6x 1TB 2.5″ 7200rpm disks. That’s 96 cores, 384GB of RAM and 24TB per 2U of rack space.
Our Hadoop jobs are very CPU heavy, and storage and disk throughput are less of an issue, hence the small amount of disk space per node. If we had more I/O and storage requirements, we would have considered 2U Supermicro servers with 12x 3.5″ disks per node instead.
As with the above chassis, each node has 2x 1gbit ethernet ports, but we’re only utilising one at the minute.

This graph illustrates the power usage on one set of machines showing the difference between Hadoop jobs running and not
Search/Solr
Just a month ago, this would’ve been grouped into the general utility boxes above, but we’ve got something new and exciting for our search stack. We’re using the same chassis as in our general example, but this time with the awesome new Sandy Bridge line of Intel CPUs. We’ve got 2x 16 core Intel E5-2690 CPUs in these nodes, clocked at 2.90ghz, which gives us machines that can handle over 4 times the workload of the generic nodes above, whilst using the same density configuration and not that much more power. That’s 128x 2.9ghz CPU cores per 2U (granted, that includes HyperThreading).
This works so well because search is really CPU bound; we’ve been using SSDs to get around I/O issues in these machines for a few years now. The nodes have 96GB of RAM and a single 800GB SSD for the indexes. This follows the same pattern of not bothering with RAID; the SSD is perfectly fast enough on its own, and we have BitTorrent index distribution which means getting the indexes to the machine is super fast.
Less machines = less to manage, less power, and less space.
Backups
Supermicro wins this game too. We’re using the catchily named 6047R-E1R36N. The 36 in this model number is the important part… this is a 4U chassis, with 36x 3.5″ disks. We load up these chassis with 2TB 7200rpm drives, which, when coupled with an LSI RAID controller with 1GB of battery-backed write-back cache, gives a blistering 1.2 gigabytes/second sequential write throughput and a total of 60TB of usable disk space across two RAID6 volumes.

36 disk Supermicro chassis. Note the disks crammed into the back of the chassis as well as the front!
Why two RAID6 volumes? Well, it means a little more waste (4 drives for parity instead of 2) but as a result of that you do get a bit more resiliency against losing a number of drives, and rebuild times are halved if you just lose a single drive. Obviously RAID monitoring is pretty important, and we have checks for either SMART (single disk machines) or the various RAID utilities on all our other machines in Nagios.
In this case we’re taking advantage of the 2x 1gbit ethernet connections, bonded together to the switch to give us redundancy and the extra bandwidth we need. In the future we may even run fiber to these machines, to get the full potential out of the disks, but right now we don’t get above a gigabit/second for all our backups.
Special cases
Of course there are always exceptions to the rules. The only other hardware profile we have is HP DL360 servers (1U, 4x 2.5″ 15,000rpm 146GB SAS disks), which is for roles that don’t need much horsepower, but we deem important enough to have RAID. For example, DNS servers, LDAP servers, and our Hadoop Namenodes are all machines that don’t require much disk space, but need RAID for extra data safety compared to our regular single disk configurations.
Networking
I didn’t go into too much detail on the networking side of things in this post. Consider this part 1, and watch this space for our networking gurus to take you through our packet shuffling infrastructure at a later date.
Monday, September 10, 2012
Case study - Ruby on Rails hardware spec
Source:http://37signals.com/svn/posts/3202-behind-the-scenes-the-hardware-that-powers-basecamp-campfire-and-highrise
Application Servers
All of our Ruby/Rails application roles run on Dell C Series 5220 servers. We chose C5220 servers because they provide high density, high performance, and low cost compute sleds at a decent cost point. The C5220 sleds replaced individual Dell R710 servers which consumed a greater amount of power and rack space in addition to offering expandability we were not utilizing.
We use an 8 sled configuration with E31270 3.40GHz processors, 32/16G of ram, an LSI raid card and 2 non-Dell SSDs. (For those of you thinking of ordering these … get the LSI raid card. The built-in Intel raid is unreliable.) Each chassis with 8 sleds takes up 4U of rackspace: 3 for the chassis and 1 for cabling.
Job / Utility Servers
We use a combination of C6100 and C6220 servers to power our utility/jobs and API roles. We exclusively use the 4 sled version (of each) which means we get 4 “servers” in 2U. Each sled has 2x X5650 processors, 48-96G of ram, 2-6 ssds, and 4×1G or 1×10G network interfaces. This design allows us to have up to 24 disks in a single chassis while consuming the same space as a single R710 server (which holds 8 disks max).
Search
For Solr we run R710s filled with SSDs. Each instance varies, but a common configuration is 2x E5530 processors, 48G of ram, 4-8 ssds, and 4×1g network interfaces. For Elastic Search we run a mix of PowerEdge 2950 servers and C5220 sleds with 12-16G of ram and 2×400G ssds in a raid 1.
Database and Memcache/Redis Servers
For Database roles we use R710s with 2x X5670 processors, 1.2TB Fusion-IO duo cards and varying amounts of memory. (Varies based on the database size.) We also have a number of older R710s powering Memcache and Redis instances. Each of these has 2x E5530 processors and 2-4 disks with 4×1G network interfaces.
Storage
We have around 400TB / 9 nodes of Isilon 36 and 72NL storage. We serve all of the user uploaded content off this storage with backups to S3.
OS Choice
Database servers run RHEL or CentOS 6, while application and utility servers run Ubuntu LTS.
Sunday, September 9, 2012
How to create a bootable USB stick on Windows
Source: http://www.ubuntu.com/download/help/create-a-usb-stick-on-windows
To run Ubuntu from a USB stick, the first thing you need to do is insert a USB stick with at least 2GB of free space into your PC.
The easiest way to put Ubuntu onto your stick is to use the USB installer provided at pendrivelinux.com. You’ll need to download it, install it, and follow the instructions.
Download Pen Drive Linux's USB Installer
Log in with open accounts
Google: https://developers.google.com/accounts/docs/OpenID
Facebook: http://developers.facebook.com/docs/guides/web/
Twitter: https://dev.twitter.com/docs/auth/sign-twitter
OAuth - Open standard for authorization
Ruby Gem - https://github.com/intridea/omniauth
Friday, September 7, 2012
Rails 3 - Model's nested attributes in form
When a Rails form contains a model and its nested attributes, use "accepts_nested_attributes_for" in the model to accept the nested parameters from the form.
e.g.
In a car model,
...
has_one :engine_spec
accepts_nested_attributes_for :engine_spec, :allow_destroy => true
...
In a car form,
...
<%= form_for(@car, :multipart => true) do |f| %>
<%= f.text_field :model_name %>
...
<%= f.fields_for :engine_spec do |spec| %>
<%= spec.text_field :capacity %>
...
Note: the engine_spec should be initialized in the car controller:
@car.engine_spec = EngineSpec.new
Git re-initialize fix after corruption
Today, a project directory somehow got corrupted: I could not add new files nor commit modified files to the local repository.
With error like these:
$ git add .
error: insufficient permission for adding an object to repository database .git/objects
error: clickthecity.rb: failed to insert into database
error: unable to index file clickthecity.rb
fatal: updating files failed
It turned out that some directories under .git/objects were owned by root. Maybe some commits were done under root access.
After a few tries in vain, I just re-initialized the local git repository.
$ sudo rm -rf .git
$ git init
$ git add .
$ git remote add origin [your_server]:[dir]
$ git commit -a -m "..."
$ git pull origin master
And do some code merging.
$ git push origin master
Done.
Thursday, August 30, 2012
Bundle exec - Execute a command in the context of the bundle
This command executes the command, making all gems specified in the Gemfile(5) available to require in Ruby programs.
Essentially, if you would normally have run something like rspec spec/my_spec.rb, and you want to use the gems specified in the Gemfile(5) and installed via bundle install(1), you should run bundle exec rspec spec/my_spec.rb.
Note that bundle exec does not require that an executable is available on your shell's $PATH.
Wednesday, August 29, 2012
gigaspaces.com - XAP
Application Scaling
- Elastic Application Platform
- Multi-site data replication
- In-memory data grid
- Cloudify - Open PaaS Stack
Friday, August 24, 2012
Ruby Semaphore
Ruby uses the Mutex class to provide a semaphore lock for mutually exclusive access to a shared resource.
Example from ruby-doc.org:
Without Semaphore:
count1 = count2 = 0
difference = 0
counter = Thread.new do
loop do
count1 += 1
count2 += 1
end
end
spy = Thread.new do
loop do
difference += (count1 - count2).abs
end
end
sleep 1
Thread.critical = 1
count1 » 184846
count2 » 184846
difference » 58126
With Semaphore:
require 'thread'
mutex = Mutex.new
count1 = count2 = 0
difference = 0
counter = Thread.new do
loop do
mutex.synchronize do
count1 += 1
count2 += 1
end
end
end
spy = Thread.new do
loop do
mutex.synchronize do
difference += (count1 - count2).abs
end
end
end
sleep 1
mutex.lock
count1 » 21192
count2 » 21192
difference » 0
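Note that `Thread.critical=` was removed in Ruby 1.9, so the first example only runs on old 1.8 interpreters. Here is a self-contained Mutex sketch in the same spirit that runs on current Rubies (bounded loops are swapped in for the infinite ones so the script terminates):

```ruby
require 'thread'  # Mutex is built in on modern Rubies; the require is harmless

mutex = Mutex.new
count1 = count2 = 0
difference = 0

counter = Thread.new do
  10_000.times do
    mutex.synchronize do
      count1 += 1
      count2 += 1
    end
  end
end

spy = Thread.new do
  100.times do
    mutex.synchronize do
      difference += (count1 - count2).abs
    end
  end
end

counter.join
spy.join
puts difference  # => 0: the spy never observes the counters mid-update
```

Because both increments happen inside the same critical section, the spy can never see count1 and count2 out of step, so the accumulated difference stays zero.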
Thursday, August 23, 2012
Use Mocha to stub a class method and raise an exception
$> gem install mocha
SomeClass.any_instance.stubs(:some_method).raises(SomeException)
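For illustration, the effect of that stub can be mimicked in plain Ruby — this is not how Mocha implements it, just a self-contained sketch of the behaviour being arranged (SomeClass, some_method and SomeException are placeholder names):

```ruby
class SomeClass
  def some_method
    'real result'
  end
end

class SomeException < StandardError; end

# Redefine the instance method to raise, roughly what
# SomeClass.any_instance.stubs(:some_method).raises(SomeException) arranges
SomeClass.send(:define_method, :some_method) { raise SomeException }

begin
  SomeClass.new.some_method
rescue SomeException
  puts 'SomeException raised'
end
```

Unlike this crude sketch, Mocha restores the original method when the test finishes.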
Extend Ruby core class in Rails application
Following the ActiveSupport convention, create an extension file in lib/core_extensions/[class].rb.
e.g. lib/core_extensions/date.rb
Then, require the file when needed.
e.g. require 'lib/core_extensions/date'
Or put the require statement in any of the config/initializers/ files.
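A minimal sketch of what such an extension file could contain (the weekend? helper is a hypothetical example, not from the original post):

```ruby
# lib/core_extensions/date.rb - reopen the core Date class
require 'date'

class Date
  # hypothetical helper: true for Saturdays and Sundays
  def weekend?
    saturday? || sunday?
  end
end

puts Date.new(2012, 8, 25).weekend?  # 2012-08-25 was a Saturday => true
```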
Wednesday, August 22, 2012
Use Rake to run selected test files
To run specific test file(s) using Rake, one may create a rake file: lib/tasks/individual_test.rake
# Run specific tests for development
namespace :test do
desc "Run tests related to request."
task :request => :environment do
files = %w(
test/unit/action_request/suspension_request_test.rb
test/unit/action_request/request_base_test.rb
test/unit/partials/action_request_test.rb
)
files.each do |full_filename|
sh "ruby -Ilib:test #{full_filename}"
end
end
end
Then
$>rake test:request
keywords: rails, rake, test, multiple, file
Rails model use a table name different from its class name
class RequestBase < ActiveRecord::Base
self.table_name = "requests"
...
end
Tuesday, August 21, 2012
Rails daily usage cheat sheet
Task | Command
---|---
Start server | rails s
Start console | rails c
Generate a db migration to add columns | rails generate migration add_column_to_some_table column_name:string...
Generate a db migration to add index | rails generate migration add_index_to_some_table
Generate new model | rails generate model loan_charge product:references listener_reference:string fixed_charge:integer min_charge:integer max_charge:integer charge_percentage:float
Friday, August 17, 2012
Configure Rails pages to use different layouts.
- Specify in the controller. By default, the controller will use the layout (under the app/views/layouts/ directory) with the same name as the controller. If not found, layouts/application.html.erb will be used.
- In the controller, set the :layout value. For details, refer to: http://guides.rubyonrails.org/layouts_and_rendering.html
Enable SQL stdout in Rails console
Edit or create a .irbrc (or .pryrc if you're using pry) file in your home directory. Add the following lines:
if defined?(Rails) && !Rails.env.nil?
  ActiveRecord::Base.logger = Logger.new(STDOUT) if defined?(ActiveRecord)
  ActiveResource::Base.logger = Logger.new(STDOUT) if defined?(ActiveResource)
  puts '... ActiveRecord and ActiveResource Logger set to STDOUT'
end
Execute SQL statement in Rails
rs = ActiveRecord::Base.connection.execute("select something from sometable")
value = rs[row_id_zero_based]['column_name']
Check if a function is called in Rails unit test
required gem: mocha
some_object.expects(:some_function_name).[expected_calls]
where
[expected_calls]
- .once - should be called exactly once
- .never - should never be called
- .at_most
- ...
Storing array in Rails yml file
Example:
delivery_check_hour_at: [0, 6, 12]
*Note that a space is essential after the comma.
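A quick way to verify the parsed value (a minimal sketch using Ruby's stdlib YAML; the key name comes from the example above):

```ruby
require 'yaml'

config = YAML.load("delivery_check_hour_at: [0, 6, 12]")
p config['delivery_check_hour_at']        # => [0, 6, 12]
p config['delivery_check_hour_at'].class  # => Array
```

Without the space after the comma, the whole value would be parsed as a single string instead of an array.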
Rails startup sequence
boot.rb -> application.rb -> environment.rb -> /environment/*.rb -> /initializers/*.rb -> routes.rb
Running delayed job in Rails
Example:
Normal run: SomeModule.some_function()
Delayed job run: SomeModule.delay.some_function()
or specify when to run:
SomeModule.delay(:run_at => Time.now + 1.hours).some_function()
Yes, it's that simple!!
Return simple javascript from Rails controller without .js.erb file
In controller:
def some_function
respond_to do |format|
format.js { render :js => "alert('hello');" }
end
end
Create Temp file in Rails
f = Tempfile.new('some_name')
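A short sketch of the typical lifecycle (stdlib Tempfile; 'some_name' is just a prefix for the generated filename):

```ruby
require 'tempfile'

f = Tempfile.new('some_name')
puts f.path.include?('some_name')  # the prefix appears in the generated path

f.write('hello temp')
f.rewind                 # go back to the start before reading
puts f.read              # => hello temp

f.close
f.unlink                 # delete explicitly instead of waiting for GC
```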
Simple AJAX on Rails
Basic ideas:
- On server side:
  - In routes.rb, set up routes to handle the ajax request calls.
  - In the controller, handle the ajax request by something like: respond_to do |format| format.js end
  - In views, create a *.js.erb file which has the same name as the corresponding controller action.
- On client side:
  - In view pages, use tags like <%= button_to ..., :remote => true, ... %>
  - Trigger the ajax request, e.g. by $('#ajax_submit').submit();
  - Create a javascript function like fn_some_callback() to handle the callback from the ajax response.
Run specific code in Rails after system initialization completes
For development environment, /config/environments/development.rb
config.after_initialize do
...
end
Run Rails console in sandbox
Any changes made will be rolled back on exit.
$>rails c --sandbox
Execute external commands in Rails and capture the output
output = %x[./script/delayed_job status]
puts output
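The related `$?` global holds the status of the last external command, which is handy when the script should react to failures (a small sketch using the portable echo command):

```ruby
output = %x[echo hello]
puts output            # captured stdout: "hello"
puts $?.exitstatus     # => 0 on success
puts $?.success?       # => true
```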
Customize a new rake task in rails
Refer to lib/tasks/*.rake
Arguments can be passed by using ENV.
e.g.
desc "create csv of loan ledgers: LEDGERS_START_DATE=2011-07-01 LEDGERS_END_DATE=2011-12-31 rake adhoc:ledgers"
task :ledgers => :environment do
Adhocs.create_csv_of_loan_ledgers(ENV['LEDGERS_START_DATE'], ENV['LEDGERS_END_DATE'])
end