Super Spread Sheet S³

Or little computing tricks and hacks

Category Archives: Sysadmin

Too many ssh authentication failures

Edit August 6, 2016

The solution I gave on August 1, 2016 does not work: that rule simply does not offer any keys.

I found this post, which refers to the pitfalls of ssh-agent and describes how having too many keys is a problem: the server allows only a specific number of offered keys before refusing access.

There is also an ssh option to force using a specific key.

Original post

In my previous post Managing multiple ssh keys, I describe how to set up different ssh keys for different servers and have the ssh command automatically discover which key to use for which (user, server) combination.

However, when I had a new server whose key was not yet in the ~/.ssh/config file (see previous post), I would experience the following:

ssh username@server
Received disconnect from server_ip port 22:2: Too many authentication failures
Connection to server closed by remote host.
Connection to server closed.

Looking around there were many explanations:

  1. That there is a limit on the number of keys offered to the server (true)
  2. That I could check what is happening by adding the -v flag to the ssh command (true)
  3. That I should add the line IdentitiesOnly yes to every definition in the ~/.ssh/config file (true but not enough)

I still had the problem.

The final solution was to add the line IdentitiesOnly yes to the /etc/ssh/ssh_config under Host *.
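The relevant fragment of /etc/ssh/ssh_config then looks like this (a sketch: the Host * block already exists in the stock Ubuntu file, only the IdentitiesOnly line is added):

```
Host *
    IdentitiesOnly yes
```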

The enlightenment came from this post.

Adventures with Ubuntu in a MacBookPro9,2

My daughter came to me one day: “Mum, my Mac is kaput.” While upgrading, the computer simply hung. After all the diagnostics possible, we figured it was a disk crash.

A Mac techie acquaintance took the computer and, after checking it over, told us that he could replace the disk and … install Ubuntu! Yes please!

After getting the computer back I started playing with it: connect to the wifi, install this, download that, why is the Ubuntu version only 14.04? Let’s upgrade. After all, we are only a few days from the next release, 15.10. Hmm, there are errors. We need to reboot, yes, no, ahhh. Kernel panic…

… After some research I had some interesting findings.

There seems to be a tie between the Mac model and the Ubuntu release. This page shows the recommended Ubuntu release for each specific MacBookPro hardware model. They recommend the latest LTS when the user is not sure of the release to install. I was reluctant to leave 14.04LTS, but looking at this wikipedia page, I was reassured that this particular version’s support runs until 2019-04! By then this Mac should be history!

To install according to the Mac’s model, first find out the hardware type by typing the following:

sudo dmidecode -s system-product-name

The output in my case:

    MacBookPro9,2

And that is where I noticed that 15.04 was not going to work. So I proceeded to reinstall 14.04LTS from a usb stick, and that was like a breeze, but only after reading how to boot from a usb stick on a Mac:

Insert the Ubuntu LiveCD into your Mac and Shutdown. Restart the Mac and hold the Option Key. When the boot selector screen comes up, choose to boot from the CD.

The full installations instructions can be found here, but I just followed the section “Single-Boot: Ubuntu Only”.

All good, except that the wireless card did not seem to be set up. But it was working before, so it can be done. I did get scared when I clicked on the MacBookPro9-2/Utopic Unicorn link and read that wireless was not supported. But Utopic Unicorn is 14.10, and I have 14.04 Trusty Tahr.

Roughly these are the steps to follow to set up the wireless connection.

Identify the wireless chipset

This can be done in a couple of ways:

  • lspci | grep Network
  • lspci -vvnn | grep -A 9 Network

From the commands I learned that

  • The Chip ID is BCM4331,
  • The PCI-ID is 14e4:4331, and
  • Kernel driver in use is bcma-pci-bridge

Find the drivers for the chipset

This guide contains a full description of the specific drivers supporting the Broadcom BCM43xx chipset, and there are different sets of instructions one could follow. In my case the chipset was supported by more than one driver, but what worked for me was the section b43 – No Internet access:

  1. Install the b43-fwcutter package.
    cd /media/pool/main/b/b43-fwcutter/
    sudo dpkg -i b43-fwcutter* 
  2. Download the firmware file from here onto a computer with an internet connection.
  3. Copy the file to your working directory (yes, using a usb stick). In a terminal use b43-fwcutter to extract and install the firmware:
    tar xfvj broadcom-wl-5.100.138.tar.bz2
    sudo b43-fwcutter -w /lib/firmware broadcom-wl-5.100.138/linux/wl_apsta.o
  4. Restart the computer or reload the b43 module by switching between drivers. I did the latter.
    First unload all conflicting drivers (this includes removing the driver you’re trying to install):

    sudo modprobe -r b43 bcma
    sudo modprobe -r brcmsmac bcma

    Then load the driver to use:

    sudo modprobe b43

And by magic I now have a wireless connection, and life is good again!


When system upgrades break gem dependencies

After recently upgrading to Ubuntu 15.04 (almost ready for the next release!), my gem dependencies were broken.

... /lib/mysql2.rb:31:in `require': 
Incorrect MySQL client library version! 
This gem was compiled for 5.5.43 but the client library is 5.6.24. (RuntimeError)

Neither bundle nor bundle update changed the gem. Only uninstalling the gem forced the correct update:

gem uninstall mysql2
bundle install

Solution taken from this post.

Restarting unicorn after boot using mina and nginx in Rails

My teammate did a wonderful job at setting up a droplet in Digital Ocean to host a Rails app. Most of the details are here.

All of the references to directories are based on having exactly the setup described in that blog post.

However, we noticed that unicorn was not being restarted after booting, and we had to do so by hand. Not quite a problem in development/staging, but in full production a real problem. Except we could not use a simple command line, because unicorn has to be started by mina.

Looking at the config/deploy.rb file, I noticed the following:

    to :launch do
      invoke :'unicorn:restart'
    end

so the command line to start unicorn should be:

mina unicorn:restart

Actually, the command only works from the correct directory, /home/deployer/my-app-name/current, as follows:

bundle exec mina unicorn:restart

So now, all I had to do was to turn that command into a script called at boot. By the way, the droplet uses Ubuntu.

After a few attempts, I created a file called /etc/init.d/my_unicorn (you need root privileges for this) with the following content:

#!/bin/sh -e
# upstart-job

echo "starting unicorn after reboot"
exec sudo -u deployer sh -c "cd /home/deployer/my-app-name/current && /home/deployer/.rbenv/shims/bundle exec mina unicorn:restart"

(Looking back, “-u deployer” might not be needed, but I did not test it.)

By just adding that script and making it executable (chmod +x), I was able to run

$ sudo service my_unicorn start

But it was not being called at boot yet. I needed to add the new service (i.e. my_unicorn) to the startup sequence. For that, the following command is needed:

sudo update-rc.d my_unicorn defaults

And that should have worked on reboot, but it didn’t.

Every time mina is called, it asks for authentication, and after 3 failed attempts it quits with an error. The solution was to create an ssh key and add it to the authorized_keys file, the same way you do for sites like github.

And that is it, if I haven’t forgotten anything!

Managing multiple ssh keys

I know. This post has been written many times. However, this one has my own flavor. This post assumes that the reader knows how to use the ssh protocol and to create ssh keys. If in doubt, visit the github instructions here.

The ssh protocol uses the ssh-agent program defined as follows:

ssh-agent is a program to hold private keys used for public key authentication (RSA, DSA, ECDSA, ED25519). The idea is that ssh-agent is started in the beginning of an X-session or a login session, and all other windows or programs are started as clients to the ssh-agent program. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh.

When there is only one ssh key, the ssh-agent seems to load it automatically (I need to investigate further, as I seem to be running polkit-gnome-authentication-agent instead of ssh-agent).

Start by identifying how many keys you need, depending on the sites you usually connect to. In my case that is github, heroku, bitbucket, computers in my local network, and a remote computer. Remove the current keys located in the ~/.ssh directory, whose names match patterns like id_{rsa,dsa}*. As I tend to be paranoid, I put them in a directory called original in case I needed to do a rollback.


The next step is to create the key, by using the command:

ssh-keygen  -f ~/.ssh/id_rsa.github -C ""

I tend to omit the passphrase by just typing enter when prompted.

This creates two files: id_rsa.github and The latter is the public key, which is what you copy into the github account settings.

The next two steps are new:
First add the key to ssh-agent:

ssh-add ~/.ssh/id_rsa.github

Second add the specification of the site to the ~/.ssh/config file:

IdentityFile ~/.ssh/id_rsa.github

You can check if the connection is working by issuing the command

ssh -T

which, if successful, will respond with

Hi bluciam! You’ve successfully authenticated, but GitHub does not provide shell access.

bluciam is my github username. To get all the information on the handshake, add a -v:

ssh -Tv

Adding the other ssh keys follows the same process, obviously replacing the names and hosts with the correct ones.
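Put together, a ~/.ssh/config covering several sites might look like this (a sketch using the key names from this post; the IdentitiesOnly lines follow the fix described in the post on too many authentication failures):

```
IdentityFile ~/.ssh/id_rsa.github
IdentitiesOnly yes

IdentityFile ~/.ssh/id_rsa.heroku
IdentitiesOnly yes

IdentityFile ~/.ssh/id_rsa.bitbucket
IdentitiesOnly yes
```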


Heroku also has a page with full instructions here. There are two commands I would like to highlight:

1. To check if the connection is working, issue the command

ssh -v

2. To add the key without logging into the site:

heroku keys:add

Local machines

For my local machines, I added local instead of the name of the server. The adding to the ssh-agent is done once:

ssh-add ~/.ssh/id_rsa.local

but there must be an entry for each machine in the config file.
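For example, the entries for two local machines sharing the same key could look like this (machine names and addresses here are hypothetical):

```
Host homeserver
HostName 192.168.1.10
User bluciam
IdentityFile ~/.ssh/id_rsa.local

Host mediabox
HostName 192.168.1.11
User bluciam
IdentityFile ~/.ssh/id_rsa.local
```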

And that is all!


Create admin user in Rails

For obvious reasons, any way to create an admin user through the web interface should be forbidden: exclude the admin field from the params hash in whatever form you have decided to implement.

One of the most secure ways to create admin users is to use the seeds.rb file. The file could look something like this:

users = {
    admin: {
        username: 'admin',
        email: '',
        password: 'adminpass',
        password_confirmation: 'adminpass',
        is_admin: true
    },
    administrator: {
        username: 'administrator',
        email: '',
        password: 'administrator',
        password_confirmation: 'administrator',
        is_admin: true
    }
}

users.each do |name, data|
  User.create!(data) unless User.where(email: data[:email]).exists?
end

Taken verbatim from here.

Once the file is created, all you have to run is

$ rake db:seed

This file will also be sourced when running

$ rake db:setup

Make sure that the password is changed as soon as the admin user is created. You can also force an admin password reset.

If using heroku for deployment this is the command to seed the database:

$ heroku run rake db:seed

Deploying Rails in Heroku using AWS S3 to store carrierwave files

I am developing an app which requires users to upload pictures on updates. Heroku allows only transient files, which stay alive only minutes or seconds in its temporary storage. The solution was to store all the pictures in the cloud.

For this I used AWS. The steps:

  1. Create an account in AWS, which is free for the first year. The verification process is lengthy, and you need a phone as you will receive an automated call.
  2. Create a bucket, which in Linux terms is a directory. You can do this by going to services -> S3 and there you should have an option for creating a bucket.
  3. Create an IAM user. This is very important, as it allows you to manage access to your account by granting specific permissions. Grab the credentials right then and put them in a safe place: you will not have access to them again and would need to recreate them to see them.
  4. To give access privileges to that user, it seems that you have to create an IAM group, grant privileges to that group, and then add the user to the group. There might be a way to grant access directly to the user, but having a group is the suggested way.
  5. That is all from the AWS side.
  6. Instead of saving the keys in a file, which you risk adding to the git repository and exposing the keys, heroku suggests adding them as environment variables. That is achieved by following the instructions in the link; briefly, it looks like this:


    $ heroku config:set S3_KEY=THATVERYLONGSTRING
    Adding config vars and restarting app... done, v12
    $ heroku config
    (and any other environment variables that might be set)
    $ heroku config:get S3_KEY
    $ heroku config:unset S3_KEY
    (When you don't need it anymore)
  7. If running in development at the same time, do set the environment variables locally as well.
  8. Add the gems to the Gemfile:
    gem 'fog'
    gem 'fog-aws'
  9. I am not sure if both are needed, but I started with fog-aws and was getting errors about an uninitialized constant. Once I added fog, there were no problems. The problem might have been that I am using an older version of carrierwave.
  10. Update the config/initializers/carrierwave.rb and each of the image uploaders. I used the information here and here.


    # config/initializers/carrierwave.rb
    CarrierWave.configure do |config|
      config.fog_credentials = {
        :provider              => 'AWS',
        :aws_access_key_id     => ENV['S3_KEY'],
        :aws_secret_access_key => ENV['S3_SECRET']
      }
      if Rails.env.test? || Rails.env.cucumber? = :file
        config.enable_processing = false
        config.root = "#{Rails.root}/tmp"
      else = :fog
        config.cache_dir = "#{Rails.root}/tmp/uploads"
        config.fog_directory = ENV['S3_BUCKET_NAME']
      end
    end

    # app/uploaders/image_author_uploader.rb
    class ImageAuthorUploader < CarrierWave::Uploader::Base
      storage :fog

      # default store_dir as generated by carrierwave
      def store_dir
        "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{}"
      end
    end
  11. And that should do it! It did for me.
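As a sketch of step 7, the same variables the initializer reads can be exported in the local shell before starting the Rails server (the values here are placeholders, not real credentials):

```shell
# Placeholder values; use the real IAM credentials locally, never commit them
export S3_KEY=AKIAEXAMPLEKEY
export S3_SECRET=exampleSecretValue
export S3_BUCKET_NAME=my-app-bucket

# Anything started from this shell sees them, e.g. ENV['S3_KEY'] in Rails
echo "$S3_BUCKET_NAME"
# prints: my-app-bucket
```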

A related post and video presentation on the subject by Nicholas Henry can be found here.

Physical versus core processors

I finally found a description of the difference between the physical processor and its cores.

The physical processor in a modern computer comes with two or four processor cores, respectively called dual- or quad-core. These cores act like virtual processors and can handle instructions as if they were standalone processors.

In the system description of the computer, a single physical processor may appear as two (for dual-core) or four processors (for quad-core), as the system might describe virtual processors and not actual sockets.

To query for the number of processors and cores, in ubuntu use the following command:
cat /proc/cpuinfo
Doing a grep on “processor” will summarize the number of cores.
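A quick way to get just the count, using the same /proc/cpuinfo source:

```shell
# Each logical processor appears as one "processor : N" line
grep -c ^processor /proc/cpuinfo
```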

(The info on this post was all taken from here.)

Samba and Lubuntu

Following the set up described in this post, I set out to do the same in the EeePcs with Lubuntu running on them.

Installing samba alone is not enough to turn the Scans folder into a shared folder. According to an Ubuntu forum question, other software has to be installed. The full list of packages is:

* samba
* system-config-samba
* gvfs-bin
* gvfs-backends

I installed each one with sudo apt-get install package-name.

As opposed to what is said in the previous post, Samba has to be launched from Menu -> System Tools -> Samba. This will ask for the root password and open the Samba Server Configuration. The Scans folder should already exist. Click on the + to add the folder. This will open a window with two tabs. The “Basic” tab has fields to fill in, and the values I entered are:

  • Directory: /home/username/Scans
  • Share name: Scans
  • Description: Remote scanning

I also clicked on both Writable and Visible.


In the second tab, “Access”, I clicked “Allow access to everybody”.


After pressing OK, I also had to manually change the permissions of the Scans directory to allow everybody to read and write:
chmod 777 Scans

After that the directory is ready to be added in the printer’s web page, as described here.

New firmware: Android 4.3

And just as you thought that Android and Samsung were making a great team, you get the new firmware update, where everything seems upside down. Yes, the fonts look better and it is faster with some features, but others are hidden, never to be seen by a normal user.

I had noticed messages on the lock screen: a message stating how to unlock (swipe your pattern) and an alert with “If found please contact me here”. But these were making the unlocking somewhat unresponsive, so I wanted to get rid of them completely. Good luck!

One particular feature was disabled in the “home screen mode” I had always used, the simple “easy mode”. To have access to the “Lock screen widgets”, where I can manage some of these messages, I had to use the “standard mode”, which is overloaded in my opinion. OK, so after a lot of browsing (check here for more information), I solved one issue. The next issue was a rolling message on that same lock screen. The problem is that I can’t remember how I set it up… I have not found access to that feature yet.