Super Spread Sheet S³

Or little computing tricks and hacks

Category Archives: git

git: comparing with remote branches

I have a Rails app which is deployed in Heroku and whose source is in bitbucket. In Heroku I have in fact two instances: staging and production. When I came back after a break from the project, I wanted to compare what was deployed or committed where, as I knew that a JavaScript bug had prevented me from completing a full deployment.

git works with branches, so the comparison I want to make takes place between branches, regardless of where they are located.

To list local and remote branches, run the command:

git branch -a

The output for my app looks something like this:


The first line is the local working branch, the second is the remote master branch in bitbucket, and the last two lines are the production and staging branches in heroku.
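Since the original screenshot is not reproduced here, a disposable sketch gives the flavour of such a listing. The repo, remote, and branch names below are illustrative, not the app's real ones:

```shell
# Build a throwaway repo with one remote to show what `git branch -a` prints.
set -e
cd "$(mktemp -d)"
git init -q --bare origin.git            # stand-in for the bitbucket remote
git clone -q origin.git work 2>/dev/null
cd work
git symbolic-ref HEAD refs/heads/master  # make sure the branch is named master
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git push -q origin master
git branch -a
# prints something like:
#   * master
#     remotes/origin/master
```

A real app with several remotes would show one `remotes/...` line per remote branch.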

I can simply run git diff with the names of the two branches to get a detailed description, line by line, of the differences:

git diff remotes/heroku/master remotes/staging/master

To list only the files that differ, use the following command:

git diff --stat --color master remotes/heroku/master
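The effect of --stat can be seen in a throwaway repo (branch and file names are made up for illustration):

```shell
# Two branches differing in one file, then a --stat comparison between them.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master
echo "line one" > app.rb
git add app.rb
git -c user.name=demo -c user.email=demo@example.com commit -qm "first"
git checkout -qb staging
echo "line two" >> app.rb
git -c user.name=demo -c user.email=demo@example.com commit -qam "second"
git diff --stat master staging
# lists each changed file with a change count, e.g.
#   app.rb | 1 +
```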


Managing multiple ssh keys

I know. This post has been written many times. However, this one has my own flavor. This post assumes that the reader knows how to use the ssh protocol and to create ssh keys. If in doubt, visit the github instructions here.

The ssh protocol uses the ssh-agent program defined as follows:

ssh-agent is a program to hold private keys used for public key authentication (RSA, DSA, ECDSA, ED25519). The idea is that ssh-agent is started in the beginning of an X-session or a login session, and all other windows or programs are started as clients to the ssh-agent program. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh.

When there is only one ssh key, the ssh-agent seems to load it automatically (I need to investigate further, as I seem to be running polkit-gnome-authentication-agent instead of ssh-agent).

Start by identifying how many keys you need, depending on the sites you usually connect to. In my case that is github, heroku, bitbucket, computers in my local network, and a remote computer. Remove the current keys located in the ~/.ssh directory, whose names follow patterns like id_{rsa,dsa}*. As I tend to be paranoid, I put them in a directory called original in case I needed to do a rollback.


The next step is to create the key, by using the command:

ssh-keygen  -f ~/.ssh/id_rsa.github -C ""

I tend to omit the passphrase by just typing enter when prompted.

This creates two files: id_rsa.github and id_rsa.github.pub. The latter is the public key, which is the one to copy into the github account settings.

The next two steps are new:
First add the key to ssh-agent:

ssh-add ~/.ssh/id_rsa.github

Second, add the specification of the site to the ~/.ssh/config file:

Host github
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa.github

Note that HostName is needed so that the alias github resolves to the real host, and that github expects the ssh user git (your own username is identified by the key, not by User).

You can check if the connection is working by issuing the command

ssh -T github

which, if successful, will respond with

Hi bluciam! You’ve successfully authenticated, but GitHub does not provide shell access.

bluciam is my github username. To get all the information on the handshake, add a -v:

ssh -Tv github

Adding the other ssh keys follows the same process, obviously replacing with the correct names and hosts.
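For reference, a ~/.ssh/config covering several sites might end up looking like this. Hostnames, aliases, and key file names are illustrative; github and bitbucket both expect the ssh user git:

```
# One Host entry per site, each pointing at its own key.
Host github
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa.github

Host bitbucket
HostName bitbucket.org
User git
IdentityFile ~/.ssh/id_rsa.bitbucket

Host heroku.com
User git
IdentityFile ~/.ssh/id_rsa.heroku

Host mybox
HostName 192.168.1.10
User bluciam
IdentityFile ~/.ssh/id_rsa.local
```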


Heroku also has a page with full instructions here. There are two commands I would like to highlight:

1. To check if the connection is working issue the command

ssh -v git@heroku.com

2. To add the key without logging into the site:

heroku keys:add

Local machines

For my local machines, I used local instead of the name of the server. Adding the key to the ssh-agent is done once

ssh-add ~/.ssh/id_rsa.local

but there must be an entry for each machine in the config file.

And that is all!


Git: looking for old versions of files

In the past two days I have been doing massive changes to my app along with some clean-up. In the clean-up, however, I changed chunks of code which I needed to refer to while creating other features.

A quick way to check the version of a file from, say, the beginning of the year using git is:

git show HEAD@{2015-01-01}:./path/to/file/file.rb
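The same trick works with any revision name, not just a date from the reflog. A sketch using a relative commit instead (file name and contents are made up):

```shell
# Retrieve an older version of a file by commit rather than by date.
set -e
cd "$(mktemp -d)"
git init -q
echo "version 1" > file.rb
git add file.rb
git -c user.name=demo -c user.email=demo@example.com commit -qm "first"
echo "version 2" > file.rb
git -c user.name=demo -c user.email=demo@example.com commit -qam "second"
git show HEAD~1:./file.rb
# prints: version 1
```

The date form (`HEAD@{2015-01-01}`) relies on the reflog, so it only works in the clone where that history was actually recorded.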

This was taken from Stack Overflow. There is another solution using gitk, taken from the same link:

1) start gitk with:

gitk /path/to/file

2) Choose the revision in the top part of the window, either by description or by date. By default, the lower part of the screen shows the diff for that revision (corresponding to the “patch” radio button).

3) To see the file for the selected revision:

Click on the “tree” radio button. This will show the root of the file tree at that revision.
Navigate down to your file.

Git: changes in the wrong branch

As I was happily following instructions to use the puma gem in heroku, I noticed I was on the wrong branch… &*$%= (or word of your choice).

Searches told me to simply go back to my branch (from refactor_home to master) and the changes would follow:

git checkout master

but instead I got this error:

error: Your local changes to the following files would be overwritten by checkout:
Please, commit your changes or stash them before you can switch branches.

Uhmm. Reading the error message, commit was out of the question. That left “stash”. git stash saves the changes somewhere. I ran

git stash

and the output was

Saved working directory and index state WIP on refactor_home: 8158613 Refactor finished with fallback for articles only

Then I checked out to the master branch, and ran

git stash pop

and presto. Now pending changes are part of the master branch.

This only worked because both branches were clean. I do not want to venture a guess at what would happen if there were a lot of other changes.
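The rescue above can be replayed in a throwaway repo (branch, file, and message names are made up):

```shell
# Edits made on the wrong branch are stashed, then popped on the intended one.
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master
echo "base" > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "base"
git checkout -qb refactor_home
echo "work meant for master" >> notes.txt   # uncommitted change, wrong branch
git stash -q                                # park the change
git checkout -q master
git stash pop -q                            # replay it on master
grep "work meant for master" notes.txt
```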

If you want to throw away the changes instead (taken from here):

git reset --hard

which is the same as

git reset --hard HEAD

since HEAD is the default.

This post explains what to do if the changes are already committed.

Git 2.0

The default behaviour of git push changes in version 2.0 of git.

Before, the default was matching:

matching – push all branches having the same name in both
ends. This is for those who prepare all the branches into a
publishable shape and then push them out with a single command.
It is not appropriate for pushing into a repository shared by
multiple users, since locally stalled branches will attempt a
non-fast forward push if other users updated the branch.

After 2.0, the new default is simple:

simple – like upstream, but refuses to push if the upstream
branch’s name is different from the local one. This is the
safest option and is well-suited for beginners.

In versions shortly before 2.0, the user gets a message about this change every time git push is run:

$ git push
warning: push.default is unset; its implicit value is changing in
Git 2.0 from 'matching' to 'simple'. To squelch this message
and maintain the current behavior after the default changes, use:

  git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

  git config --global push.default simple

See 'git help config' and search for 'push.default' for further information.
(the 'simple' mode was introduced in Git 1.7.11. Use the similar mode
'current' instead of 'simple' if you sometimes use older versions of Git)

Everything up-to-date

So as the message states and for the sake of repetition, to keep the pre 2.0 behaviour use:
git config --global push.default matching

To adopt the new behaviour use:
git config --global push.default simple

The latter is recommended when the repository is shared and for beginners.

Bare gits

Every time I do this I have to go to my notes or the Web… It’s time to put it in one place.

I want to create a new repository which is the common reference point for our data. It can be seen as a backup or just a sanity check. In other words, I want to change where pushing and pulling is directed: the remote origin.

This time the reference point is a zip drive, which should always be connected to the same computer, otherwise the URL to reach the gits cannot be fixed. git push and git pull will respectively update the bare repos with local changes and fetch the latest changes, provided the correct URL is in the configuration file.

There are two ways of doing the same thing. Choose the one you understand best:

  1. Here you need to clone into the directory where you are storing the bare repository. Manually, though, you have to change the config file in the .git directory so that push and pull are automatically directed to this bare repository:
    • Create the directory where all the gits are to be stored, assuming there are more than one.
    • cd to each directory in turn
    • git --bare init
    • In the original directory, after updating the config file to have the bare repository as the origin, push the tree: git push
  2. With only the following command, git creates a bare repository while cloning, assuming you are in the working tree of the repository you want to make a bare copy of:
    • git clone --bare full-path-of-the-repository (optional)-name-of-repository
    • git push
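The second method can be sketched end to end in a throwaway directory (all paths and names below are illustrative):

```shell
# Clone a working repository as a bare copy, point origin at it, and push.
set -e
base=$(mktemp -d)
cd "$base"
git init -q work
cd work
git symbolic-ref HEAD refs/heads/master
echo "data" > data.txt
git add data.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "first"
cd "$base"
git clone -q --bare work work.git        # the bare reference copy
cd work
git remote add origin "$base/work.git"   # direct future push/pull at it
git push -q origin master
git ls-remote origin                     # shows refs/heads/master in the bare copy
```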

To update the config file I commonly use the following method:

  • Manually changing the file with a text editor. You must know what you are doing here.

Another method uses the git remote command. It has an issue, though: it also modifies the master branch, and then when pulling, one must specify the branch each time.

  1. git remote rename origin new-name
  2. git remote add origin the-url-of-the-bare-repository

I will investigate later and post the workaround to this issue.
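One workaround worth noting: git remote set-url changes only the URL of an existing remote and leaves its branch tracking alone, so the rename-and-re-add dance may not be needed at all. A sketch with made-up paths:

```shell
# Swap the URL of origin without renaming or re-adding the remote.
set -e
cd "$(mktemp -d)"
git init -q
git remote add origin /old/path/repo.git            # illustrative old URL
git remote set-url origin /new/path/bare-repo.git   # illustrative new URL
git remote get-url origin
# prints: /new/path/bare-repo.git
```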

Backing up git repositories and cron

All of our important files and ongoing work are managed through git repositories. This facilitates immensely the sharing and updating among us. Ideally one would have a bare (or central) repository, which is a git repository that does not have a working tree. This repository allows users to «push» their changes into it. If this bare repository is understood as central for all users, depositing (or pushing) and retrieving (or pulling) from it guarantees the propagation of changes among everybody and, very importantly, keeps updated copies in different locations.

But sometimes one forgets to commit the latest changes or to pull on a regular basis. (For more information on git, bare repositories, commit, push and pull, go to here or here.) I created some scripts to run automatically using cron for this purpose. The workflow we want is:

  1. pull from the bare repository
  2. commit your changes locally
  3. push your changes to the bare repository

Depending on the nature of your work or file structure, you might want to pull and push only on selected working trees (really, directories). I created a script for each of these two functions, each reading from a list of selected directories.
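A minimal sketch of such a script, written as a shell function. The function name backup_repos, the one-directory-per-line list format, and the commit message are my assumptions, not the author's actual script:

```shell
# Hypothetical helper: for each working tree listed in a file,
# pull from the bare repository, commit local changes, and push them back.
backup_repos() {
    dirs_file=$1                     # assumed format: one working tree per line
    while IFS= read -r dir; do
        (
            cd "$dir" || exit 0
            git pull -q --ff-only    # 1. pull from the bare repository
            git add -A               # 2. commit your changes locally
            git -c user.name=backup -c user.email=backup@localhost \
                commit -qm "automatic backup $(date +%F)" || true
            git push -q              # 3. push your changes to the bare repository
        )
    done < "$dirs_file"
}
```

The subshell around each directory keeps a failing repo from aborting the whole run, and the `|| true` lets the loop continue when there is nothing to commit.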

After having tested the scripts, I set up the cron jobs to run in the middle of the night, to ensure that I’m not working on a tree at that moment. This is done by running the command crontab -e and simply adding the command to run at the end of the file, following the syntax described in that file:

minute hour dayOfMonth month dayOfWeek command

To run the command every day, replace dayOfMonth, month and dayOfWeek with an asterisk. For example:

2 2 * * * /full/path/to/commit_and_pull_script 2> /full/path/to/error_log_file

will run a script which commits the changes and then pulls from the repository at 2:02 in the morning, logging any errors to the error_log_file. Without that redirection, if there is an error you might get the following in the syslog file:

(CRON) info (No MTA installed, discarding output)

This means that cron tried to mail the error, but there is no Mail Transfer Agent set up in the system, so cron could not send the email and the error output is lost.

Do not run the bare command crontab (with no arguments) to view or change your cron table, as it will wipe it out, with no back-up! Use crontab -l to view it and crontab -e to edit it. The actual file is stored at /var/spool/cron/crontabs/username, so as a precaution I put a copy of that file in my home directory.

Log file

I keep logs of the output of the scripts to know what was done and whether there were any errors. To clean up the logs periodically, I use logrotate, again as a scheduled task. I simulate what is done at the linux system level: I create the directories var and etc locally in my home directory. Under var, I create lib, where the status file of logrotate (logrotate.status) is stored, created by the process itself; and log, where all the logs from the scripts are directed. The configuration file for logrotate (logrotate.conf) is stored under etc and is created by the user. This file specifies which logs are to be rotated and how often. The format is simple and you can find examples using man.
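For illustration, such a logrotate.conf might look like this. The path and the rotation policy are assumptions, not the author's actual configuration:

```
# Rotate the script logs weekly, keeping four compressed generations.
/home/bluciam/var/log/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```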

To schedule the job:

1 1 * * * /usr/sbin/logrotate -s /full/path/logrotate.status /full/path/logrotate.conf > /full/path/logrotate.log 2> /full/path/logrotate.err

This will run logrotate at 1:01 every morning, reading the status from logrotate.status, using the configuration in logrotate.conf, saving a log in logrotate.log and errors to logrotate.err. Make sure that all of the directories exist before running the commands.