git: comparing with remote branches

I have a Rails app deployed on Heroku, with its source in Bitbucket. In Heroku I have in fact two instances: staging and production. When I came back to the project after a break, I wanted to compare what was deployed or committed where, as I knew that a JavaScript bug had prevented me from completing a full deployment.

git works with branches, so the comparison I want to make takes place between branches, regardless of where they are located.

To list local and remote branches, run the command:

git branch -a

The output for my app looks something like this:

  ...
  master
  remotes/origin/master
  remotes/heroku/master
  remotes/staging/master
  ...

The first line is the local working branch, the second is the remote master branch in Bitbucket, and the last two lines are the production and staging branches in Heroku.

I can simply run git diff with the names of the two branches to get a detailed, line-by-line description of the differences:

git diff remotes/heroku/master remotes/staging/master

To get only the list of files that differ, use the following command:

git diff --stat --color master remotes/heroku/master
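
Note that remote-tracking branches only reflect the state of the last fetch, so before comparing it is worth refreshing them all:

git fetch --all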


Dear App, why are you giving me the wrong date?

Localization has always amazed me. But I also giggle when developers get it wrong. I know from experience that it is not obvious. How does the app know what language you speak if you live in Montreal, Switzerland or Belgium, where there is more than one official language? And that is assuming the user wants to use the app in one of the official languages.

Added to that are timezones and the daylight saving shifts a single place goes through during the year. I am originally from Colombia, where even the notion of seasons is summarized as the rainy and dry seasons. Don’t ask me when each is supposed to be! Changing the time in summer? Forget it. “Wow, are you really going to sleep now, aunty? Here we are about to have lunch!” The internet brings these issues to the fore and forces the need for solutions. The infrastructure is there to be used.

The last time I stumbled over timezones was when implementing the events section of my Rails app. The first surprise was to find that the time I saved was shifted. The reason made perfect sense: the canonical storage timezone in the database is UTC, and the display can be done in a timezone of your choice. Cool. I adjusted the configuration and added the following line to the config/application.rb file:

    config.time_zone = 'Eastern Time (US & Canada)'

That worked just fine on my local development platform: storage was done in UTC and display in EST/EDT depending on the time of year, with the following code:

    <%= event.my_date.to_formatted_s(:long_ordinal) %>
    <%= event.my_date.zone %>

But when I deployed to Heroku, the initial problem came back. I realized that it had something to do with the local timezone of the machine, somehow. As would only make sense, the Heroku server runs on UTC time. However, even though the default timezone read EST, saving into the database simply stripped the timezone: 17h00 EST became 17h00 UTC.

I could not find a reason, but that does not matter. The point is to make the app work standalone, independently of where it is deployed.

I tried several paths:

1. I added an extra line to the config/application.rb as follows:

    config.active_record.default_timezone = 'Eastern Time (US & Canada)'

However, that line just made the previously saved my_date fields disappear.
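
In hindsight, I believe the problem with this line is that active_record.default_timezone does not take a zone name at all; it only accepts the symbols :utc or :local, which would explain the odd behaviour. The valid form would look like this:

    # config/application.rb
    # Valid values are :utc (the default) or :local; a zone name string is not accepted.
    config.active_record.default_timezone = :utc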

2. I then used the Time.use_zone method to create a block in which the default timezone would be defined:

Time.use_zone("Eastern Time (US \& Canada)") {
        time_entered = Time.new(
          params[:date]['year'].to_i,
          params[:date]['month'].to_i,
          params[:date]['day'].to_i,
          params[:date]['hour'].to_i,
          params[:date]['minute'].to_i,
        )
        @event.the_date = time_entered
      }

This was a step forward: time_entered was produced in the correct timezone, but the assignment and subsequent saving to the database simply stripped off the timezone information.

3. After much trial and error, the solution was to create a Time object with the correct UTC offset. The offset has to depend on the entered date, because what matters is whether daylight saving time applies on that date.

The final solution is:

      Time.use_zone("Eastern Time (US \& Canada)") {
        time_entered = Time.new(
          params[:date]['year'].to_i,
          params[:date]['month'].to_i,
          params[:date]['day'].to_i,
          params[:date]['hour'].to_i,
          params[:date]['minute'].to_i,
        )
        the_offset = Time.zone.parse(time_entered.to_s).utc_offset
        new_time = Time.new(
          params[:date]['year'].to_i, 
          params[:date]['month'].to_i,
          params[:date]['day'].to_i,
          params[:date]['hour'].to_i,
          params[:date]['minute'].to_i,
          0,
          the_offset
        )
        @event.my_date = new_time
      }

The code for viewing the date did not change.
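
For what it is worth, I suspect a shorter route is Time.zone.local, which builds an ActiveSupport::TimeWithZone directly in the zone set by use_zone, offset included. A sketch, untested in this app:

      Time.use_zone("Eastern Time (US & Canada)") {
        # Time.zone.local interprets the values in the block's timezone
        # and returns a TimeWithZone carrying the correct UTC offset.
        @event.my_date = Time.zone.local(
          params[:date]['year'].to_i,
          params[:date]['month'].to_i,
          params[:date]['day'].to_i,
          params[:date]['hour'].to_i,
          params[:date]['minute'].to_i
        )
      }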

If you have any further comments or explanations, please send them this way.


Rails: Different storage for carrierwave assets depending on the environment

I am using carrierwave to store images, one (or more) per record in a table of my Rails app's database. Given that Heroku does not allow assets to be uploaded by the user, or dynamically for that matter, I had to use storage in the cloud. Mind you, not a bad thing. I chose AWS S3 to host my bucket.

Carrierwave has an extensive explanation of how to set up the right configuration variables to allow storage in the cloud. (Carrierwave on GitHub)

The first problem I had was running the development and staging (or production) sites at the same time: updates in one were visible in the other, as in the following case:

  1. Create a record with id=1 in production. The image associated with it is stored in the cloud under id=1.
  2. Create a record with id=1 in development. The image associated with it overwrites, in the cloud, the previously created image.

My solution to this problem was to create a second bucket and to add the following setting to the file config/initializers/carrierwave.rb:

# config/initializers/carrierwave.rb
...
  if Rails.env.production?
    config.fog_directory = 'bucket-0'
  elsif Rails.env.development?
    config.fog_directory = 'bucket-1'
  end
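
An alternative sketch I considered later is deriving the bucket name from the environment, so a new environment does not need another branch in the conditional (myapp is a made-up prefix, and each bucket must still be created in S3):

  config.fog_directory = "myapp-#{Rails.env}"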

The next problem was that running the test suite affected the development data in the same way as described before. In fact, I wanted the images for the test suite simply to be stored locally and then deleted. The first approach was to include the following in the same file as above:

# config/initializers/carrierwave.rb
...
  if Rails.env.test? || Rails.env.cucumber?
    config.storage = :file
    config.enable_processing = false
    config.root = "#{Rails.root}/tmp"
  else
    config.storage = :fog
  end

  config.cache_dir = "#{Rails.root}/tmp/uploads"

And to delete the files after the tests finish, add the following to the file spec/spec_helper.rb:

# spec/spec_helper.rb
RSpec.configure do |config|
  config.after(:all) do
    if Rails.env.test?
      FileUtils.rm_rf(Dir["#{Rails.root}/tmp/uploads"])
    end
  end
end

That only seemed to work; I think the display was using cached data. Only recently, while reworking some aspects of the app, did I notice the problem: the images in development were still being overwritten by the test suite.

I had to add the following to the uploader file (the class-level storage call there overrides the global configuration, which I suspect is why the initializer setting alone was not enough):

# app/uploaders/image_uploader.rb
...
  if Rails.env.test?
    storage :file
  else
    storage :fog
  end

Et voilà!

Managing multiple ssh keys

I know. This post has been written many times. However, this one has my own flavor. This post assumes that the reader knows how to use the SSH protocol and how to create SSH keys. If in doubt, visit the GitHub instructions here.

The ssh protocol uses the ssh-agent program defined as follows:

ssh-agent is a program to hold private keys used for public key authentication (RSA, DSA, ECDSA, ED25519). The idea is that ssh-agent is started in the beginning of an X-session or a login session, and all other windows or programs are started as clients to the ssh-agent program. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh.

When there is only one SSH key, ssh-agent seems to load it automatically (I need to investigate further, as I seem to be running polkit-gnome-authentication-agent instead of ssh-agent).

Start by identifying how many keys you need, depending on the sites you usually connect to. In my case that is github, heroku, bitbucket, computers in my local network, and a remote computer. Remove the current keys located in the ~/.ssh directory, whose names match patterns like id_{rsa,dsa}*. As I tend to be paranoid, I put them in a directory called original in case I needed to roll back.

Github

The next step is to create the key using the command:

ssh-keygen  -f ~/.ssh/id_rsa.github -C "myemail@example.com"

I tend to omit the passphrase by just typing enter when prompted.

This created two files: id_rsa.github and id_rsa.github.pub. The latter is the actual key to copy into the GitHub account settings.

The next two steps are new.
First, add the key to the ssh-agent:

ssh-add ~/.ssh/id_rsa.github

Second, add the specification of the site to the ~/.ssh/config file:

Host github
Hostname github.com
User bluciam
IdentityFile ~/.ssh/id_rsa.github
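
Note that Host defines an alias: a git remote can reference the alias explicitly instead of the real hostname (myrepo below is a made-up repository name):

git clone git@github:bluciam/myrepo.git

In my case the plain github.com address also worked, presumably because the key had already been loaded into the agent.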

You can check if the connection is working by issuing the command

ssh -T git@github.com

which, if successful, will respond with

Hi bluciam! You’ve successfully authenticated, but GitHub does not provide shell access.

bluciam is my GitHub username. To get all the information on the handshake, add -v:

ssh -Tv git@github.com

Adding the other SSH keys follows the same process, obviously replacing the names and hosts with the correct ones.

Heroku

Heroku also has a page with full instructions here. There are two commands I would like to highlight:

1. To check if the connection is working, issue the command

ssh -v git@heroku.com

2. To add the key without logging into the site:

heroku keys:add
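
If several keys live in ~/.ssh, I believe you can also pass the public key explicitly (id_rsa.heroku being my naming scheme, not a requirement):

heroku keys:add ~/.ssh/id_rsa.heroku.pub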

Local machines

For my local machines, I used local instead of the name of a site in the key's filename. Adding the key to the ssh-agent is done once:

ssh-add ~/.ssh/id_rsa.local

but there must be an entry for each machine in the config file.
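
As a sketch, with made-up hostnames and addresses, the entries look like this:

Host officebox
Hostname 192.168.1.20
User bluciam
IdentityFile ~/.ssh/id_rsa.local

Host laptop
Hostname laptop.local
User bluciam
IdentityFile ~/.ssh/id_rsa.local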

And that is all!

Further reading: https://gist.github.com/jexchan/2351996/

Ckeditor in Heroku

I was reaaaally happy to have found the ckeditor gem. As my grandmothers used to say: “Mató dos pájaros de un tiro” (killed two birds with one shot). The gem took charge of all the typesetting of the articles, INCLUDING adding pictures to the body of the text.


Everything worked like magic in my test environment, until I deployed to Heroku. All charms gone: it did not work. Where the editing window was supposed to go, there was just nothingness.

The solution taken from here:

Make sure to

bundle update ckeditor

and then add these lines to config/application.rb:

config.assets.precompile += Ckeditor.assets
config.assets.precompile += %w( ckeditor/* )
config.autoload_paths += %W(#{config.root}/app/models/ckeditor)
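
With these lines committed, I believe a normal deployment is enough, since Heroku runs the asset precompilation step during the build of a Rails app:

git push heroku master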

and that worked for me.

Deploying Rails in Heroku using AWS S3 to store carrierwave files

I am developing an app which requires users to upload pictures with their updates. Heroku allows only for transient files, staying alive only minutes or seconds in its temporary storage. The solution was to store all the pictures in the cloud.

For this I used AWS. The steps:

  1. Create an account in AWS, which is free for the first year. The verification process is lengthy and you need a phone, as you will receive an automated call.
  2. Create a bucket, which in Linux terms is a directory. You can do this by going to services -> S3 and there you should have an option for creating a bucket.
  3. Create an IAM user. This is very important, as it allows you to manage access to your account with specific permissions. Grab the credentials right then and put them in a safe place. You will not have access to them again; you would need to recreate them to see them again.
  4. To give access privileges to that user, it seems that you have to create an IAM group and grant privileges to that group. Then add the user to the group. There might be a way to grant access directly to the user, but having a group is the suggested way.
  5. That is all from the AWS side.
  6. Instead of saving the keys in a file, which you risk adding to the git repository and exposing the keys, Heroku suggests adding them as environment variables. That is achieved by following the instructions in the link, and briefly it looks like this:

     

    $ heroku config:set S3_KEY=THATVERYLONGSTRING
    Adding config vars and restarting app... done, v12
    S3_KEY: THATVERYLONGSTRING
    
    $ heroku config
    S3_KEY: THATVERYLONGSTRING
    (and any other environment variables that might be set)
    
    $ heroku config:get S3_KEY
    THATVERYLONGSTRING
    
    $ heroku config:unset S3_KEY
    (When you don't need it anymore)
    
  7. If running on development at the same time, do set the environment variables locally as well (see the sketch after this list).
  8. Add the gems to the Gemfile:
    gem 'fog'
    gem 'fog-aws'
  9. I am not sure if both are needed, but I started with fog-aws alone and was getting uninitialized-variable errors. Once I added fog, there were no problems. The cause might have been that I am using an older version of carrierwave.
  10. Update the config/initializers/carrierwave.rb and each of the image uploaders. I used the information here and here.

     

    # config/initializers/carrierwave.rb
    
    CarrierWave.configure do |config|
      config.fog_credentials = {
        :provider              => 'AWS',
        :aws_access_key_id     => ENV['S3_KEY'],
        :aws_secret_access_key => ENV['S3_SECRET']
      }
    
      if Rails.env.test? || Rails.env.cucumber?
        config.storage = :file
        config.enable_processing = false
        config.root = "#{Rails.root}/tmp"
      else
        config.storage = :fog
      end
    
      config.cache_dir = "#{Rails.root}/tmp/uploads"
    
      config.fog_directory = ENV['S3_BUCKET_NAME']
    end
    
    # app/uploaders/image_uploader.rb
    
    class ImageAuthorUploader < CarrierWave::Uploader::Base
      storage :fog
      def store_dir
        "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
      end
    end
    
  11. And that should do it! It did for me.
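
Coming back to step 7: locally, the same variables can simply be exported in the shell (or loaded from a file that is kept out of git). A sketch with placeholder values:

    $ export S3_KEY=THATVERYLONGSTRING
    $ export S3_SECRET=THATOTHERLONGSTRING
    $ export S3_BUCKET_NAME=bucket-0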

A related post and video presentation on the subject by Nicholas Henry can be found here.