Pyenv install using shared library


Random nice picture, not related to the post. You’re welcome 🙂

I used to have only virtualenvs. Then I moved to using only conda. For a while I thought I had to pick either one or the other, but I have happily switched to pyenv as a way to manage both conda and virtualenv Python environments, and I can still pick the interpreter version, Python 2.7 or 3.4.
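For reference, a setup along these lines is what I mean (the version strings are just examples of what pyenv install --list offers on your machine, and the virtualenv commands come from the pyenv-virtualenv plugin):

$ pyenv install 2.7.10                 # a plain CPython interpreter
$ pyenv install miniconda3-3.19.0      # a conda-based distribution
$ pyenv virtualenv 2.7.10 blog-env     # a virtualenv managed through pyenv
$ pyenv local blog-env                 # picked up automatically in this directory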

I have just noticed that my IPython notebook couldn’t access the shared sqlite and readline libraries, which was bad: my history was not being saved, and readline support makes everything a little bit more enjoyable.

After 2 minutes of googling, I found the solution:

$ env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 2.7.10
$ pyenv global 2.7.10

and you are sorted.
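If you want a quick sanity check that the shared libraries are actually picked up, something like this should run without errors:

$ python -c "import readline, sqlite3; print('all good')"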

I found the solution on Stack Overflow.

How to get WebEx running smoothly on Ubuntu

It being 2016, there are a lot of ways to set up a video link between people.

While Skype, Viber, WhatsApp or anything else can open a video connection and be used between friends, in the business world the options are a little more limited.

One of the options that is on par with the times is Google Hangouts: if your company has Google Apps, you can set up a nice meeting directly attached to your calendar invitation. It’s awesome and I like it a lot. My choice.

However, in the business space old habits die hard, so people stick to things like GoToMeeting, which is not too bad, or the worst thing ever supported on Linux: WebEx.

Running WebEx on Linux is a nightmare, to put it mildly. WebEx is a Java application, but they made sure that you can only run the 32-bit version, and you can launch the applet only from a 32-bit Firefox installation. They may have their own reasons, but honestly I don’t really get it, and I think it is super crazy.

After battling with it for at least 4 hours, I found a reproducible way to get it going.

Here are the steps:

1) Install the 32-bit Firefox and some libraries for a nicer appearance.

sudo apt-get install firefox:i386 libcanberra-gtk-module:i386 gtk2-engines-murrine:i386 libxtst6:i386

2) Download the JRE from Oracle:

Pick the JRE from this link; get the tar package, not the RPM: http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html

3) Create a dedicated dir and untar it there

mkdir ~/32bit
cd ~/32bit
tar xvf ~/Downloads/jre-8u73-linux-i586.tar.gz

4) Create the Firefox plugins directory

mkdir -p ~/.mozilla/plugins

5) Link the plugin

ln -vs ~/32bit/jre1.8.0_73/lib/i386/libnpjp2.so ~/.mozilla/plugins/
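Optionally, before launching Firefox, run a quick sanity check that the 32-bit JRE is in place (assuming the same version downloaded above; adjust the path if yours differs):

~/32bit/jre1.8.0_73/bin/java -version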

Now you are all set!

In my test I was able to share the screen and use the computer audio, and everything worked OK.

Good luck, and honestly, if you can, avoid WebEx.

Apollo, a laptop with Linux pre-installed from Entroware

Apollo running latest Ubuntu

TL;DR: Get it.

I have been a Linux user for a long time, so long that I still know the difference between GNU/Linux and Linux.

My first distribution, right off the bat, was Gentoo, where I was compiling kernels and everything else just to have a working system, with very little idea of what I was doing.

This usually meant fighting with drivers, searching for solutions on forums (sometimes very well hidden ones) and being a quite technical person.

I have to say I learned a lot during that time, and Linux actually got me interested again in computers and informatics in general.


A long time ago, in a galaxy far, far away, recompiling the kernel was normal, even if you didn’t know what the kernel was!

Time passed and several revolutions happened. First, the Ubuntu distribution was founded, and I think it really helped bring Linux closer to the masses. Of course one of the good ideas was to use Debian as a base, however I think the effort spent on bringing a coherent user interface to the general public was what Ubuntu was really striving for. Bug number one was closed a long time ago, mostly due to the rise of portable computing and the part Android has played in it, although I still think that Ubuntu played a big part.

Second, a big shift in the laptop landscape also materialized. Dell was one of the first big retail names to provide a laptop with Linux, in particular the XPS 13, which was always a good computer. Dell offered Ubuntu pre-installed directly from the factory, which meant no Windows licence fee, and that was the choice I made: I got one of the old XPS models and had a good experience with it. The motherboard suffered quite a few hiccups, but all in all the laptop did its job valiantly.

The new 13-inch XPS looks pretty good, and Project Sputnik is entirely devoted to making sure this laptop runs Ubuntu and other distributions properly. While the XPS 13 is a terrific laptop, two main problems stopped me from picking it. First: the position of the video camera. I get the small-bezel idea, and squeezing a 13-inch display into what is usually the body of an 11-inch machine is great for portability, however the angle of the camera is desperately bad.

Basically, if you have a video call with somebody, they see the inside of your nose instead of your face. If video calls are a day-to-day experience for you, it is not an option.

The second problem is the screen. While the colours are amazingly brilliant and the resolution so high that you need a lens to see the icons, in most of the high-end configurations only the glossy option is available.

A glossy display reflects all the light, so even a little sunlight hitting the screen will turn it into a mirror, which clearly makes it less usable. Basically you can’t see what is going on. And that is bad.

Xkcd, laptop hell

With laptops, either you don’t care, or you get extremely opinionated.

So that brings me to the Apollo by Entroware.

Apollo 13 inch, sleek and nice.

A very nice in-depth review has been done by the Crocoduck here, so I suggest you read it there. Here I’m going to give my general impressions.

Apollo laptop impressions

When you power up the laptop via the dedicated power button integrated in the keyboard, you are greeted by the Ubuntu installer. Partitioning and most of the setup is already done for you; all you have to do is pick the username and the timezone.

After that you land in a standard Ubuntu installation. Everything works out of the box, in particular:

  • WiFi just works via Network Manager
  • The USB ports work: I even bought a USB 3.0 to Ethernet dongle, and it was just plug and play
  • All the Fn keys (backlit keyboard, Bluetooth, screen brightness and so forth) work
  • Suspend works out of the box, without any problems. I’ve noticed that the WiFi sometimes does not come back properly, but it is easily fixed by restarting Network Manager: systemctl restart network-manager.service

Specs: a Skylake i7 CPU, 500 GB SSD, 8 GB RAM, a 1920×1080 display (which I run at 1600×900 to get bigger text), weighing a bit less than 1.5 kg. Everything for £824, which is an honest price I think.

The keyboard is very comfortable and nice. The keys have a nice feel and it’s not tiring to type on it. The touchpad is OK: tapping works great and the sensitivity seems good. Clicking is doable, however it’s one of those integrated touchpads, so it will never be as good as a touchpad with physical buttons.

So if you are in the market for a sleek, portable laptop running Linux, I totally suggest checking out the Entroware laptops.

 

Pills of FOG

 

We’re out of the FOG! The Festival of Genomics concluded yesterday and it was a blast.

I was at the SevenBridges booth, showing researchers how it is possible to actually create reproducible pipelines using open technologies, like the CWL and Docker.

The festival was very good; I totally suggest reading the posts by Nick on the company blog (day 1, day 2).

I’m very pleased to say that researchers and scientists are really looking for a way to encode complex pipelines and make sure they can be replicated. Replicating scientific experiments is always a good idea, and having a way to describe them programmatically, so they can be given directly to a computer, is a big step forward.

If you have ever written a pipeline, you know that things can get messy. In the best-case scenario you have a bash wrapper script that calls some executables with some parameters, and it may take some arguments.

If it is very bad, you may have custom Perl (or Python) scripts that call custom bash scripts that rely on hard-coded paths, which then launch executables that can only be run with certain software versions, on a certain type of cluster, with particular compile options.

And, unbelievable as it sounds, the second scenario is very common, and the number of custom programs and scripts involved is very high.

However, it does not matter how complicated your pipeline is, how obscure the programs you use are, or how many ad hoc scripts you rely on: you can wrap all of them, express the whole pipeline and share it using the CWL, hinging on custom Docker images.
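The core trick is pinning each tool inside a Docker image, so that the exact binary and its environment travel with the pipeline description. A rough sketch of the idea (the image tag and the file names are purely illustrative, and the reference is assumed to be indexed already):

# run bwa from a pinned container image; swap in whatever image actually
# ships the tool and version your pipeline uses
docker run --rm -v "$PWD":/data -w /data biocontainers/bwa:v0.7.15 \
    bwa mem reference.fa reads.fq > aligned.sam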

For example, take a look at this classic BWA+GATK pipeline for calling variants (you may have to create a user on the platform to see it; do not worry, it is free). Even with more than 20 steps, all the software, parameters and the computational environment can be tracked and, most importantly, reproduced.

Any software can be ported to this language and expressed in it; the only requirement is that it runs in a Linux environment, so that it can be dockerized.

Have a go: we may get past these cowboy days, and start to reproduce results down to the byte!

2015 in review

2016

Happy New Year!

New Year’s Eve is upon us once more, and this is a good time to do the classic yearly review.

First of all, this is a success story. Last year I decided to write more, and I actually managed to do it. We went from only one post in 2014 to a grand total of 19 posts this year. Not bad at all.

A very quick rundown of the stats: my classic workhorse, the pull request rescue post, is still going strong and is the main point of entry from Google to this blog. This year a new entry has come along: speed up list intersection in Python. It proved to be quite popular and is standing its ground even though it is quite new. A lot of other posts this year have been relatively popular, like moving the blog to dokku and some dokku advice as well.

Lots of things have happened, both in my personal and my work life. It’s a great time of change, and new adventures are going to start very soon. As usual this blog will remain mostly about scientific and work-related subjects, but I expect to write more about bioinformatics and Docker in the future.

Last but not least, this is the generated annual report for 2015.

2016 is looking very exciting, dense and packed. I hope I can still write the odd post, but as usual we will see next year.

In the mean time, Happy New Year!

P.S.: Yep, Santa Claus and the snow will go away after the holidays, do not panic.

 

Why a password manager is a good idea

Do you remember your password for this site?

If you are like me, you use a lot of different websites and services on the net, with the corresponding number of passwords.

There are a bunch of strategies which tend to be used to handle this situation:

  1. Re-use the same password over and over. This is the most dangerous one.
  2. Use several passwords, usually with decreasing security levels: for example a super secure password for your email, and then less complicated, less secure passwords for everything else. Usually a pool of five or six passwords.
  3. Refer to the trusted document sitting on your computer, with your passwords in clear text.

If any of these scenarios looks familiar, then it’s time to revamp what you are doing and change approach.

Let me introduce you to Clipperz:

Online password manager with client side encryption

Clipperz is an open-source online password manager which knows nothing about your data and sports a client-side encryption system.

What does it mean?

It means that the encryption is done on the client side (in your browser, via JavaScript), and only the encrypted data is sent to the server to be stored. So if somebody hacks the servers, they will get encrypted nonsense which they cannot decode without your passphrase.
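As a rough analogy of what client-side encryption means in practice (this uses openssl purely for illustration, it is not what Clipperz actually runs), the server only ever stores something like the scrambled output of:

# encrypt a secret locally with a passphrase; only the unreadable,
# base64-encoded result would ever leave your machine
echo "my secret note" | openssl enc -aes-256-cbc -a -salt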

To sign up you need to pick a username and a passphrase. The only catch is that, because Clipperz does not know anything about you, there is no way to recover the passphrase. So if you forget it, it’s gone.

Once your account is set up and you have logged in, you can record any type of information that you want to keep secret and secure.

Classic entry for a new Clipperz record

For example, if you have just created an account on a new website, you can record the URL, the username and the password you used. If you want, you can use the password icon on the right to generate a new random password. This is extremely handy, because all your passwords will be random, and if somebody manages to get the passwords out of that website, you do not have to worry: you just generate a new one and change it!

Once you have saved it, that record will be available online from any device: just go to Clipperz again and log in. Additionally there is a search and tagging system, as well as the possibility to keep a read-only backup of Clipperz on your computer.

It’s been quite a while since I bit the bullet and started to use a password manager, and I have never regretted it. Moreover, I think Clipperz is extremely good and I am extremely happy with it.

Take it for a ride, one passphrase to rule them all!

Recovering bitcoins from old wallets

To the moon and back?

Bitcoin, and the blockchain specifically, is a pretty cool technology. The price of a bitcoin, as shown in the image above, is still in flux. That’s a euphemism for a bloody roller-coaster. 🙂

Anyway, this post is about something related, but not entirely the same. This post is about recovering some bitcoins which I had in an old wallet, and which I thought were going to stay there.

Some background info

When I was involved with Coinduit I used to keep some bitcoins (some fractions of a bitcoin, of course) in a Mycelium wallet on my Android mobile phone. Unfortunately my phone developed a rather strange problem: even when connected to the charger, it was unable to charge. This meant I had a small amount of time to transfer all my bitcoins and take a backup of my existing wallet. I tried to do the 12-word backup with Mycelium, but didn’t manage it. However, I did manage to export the private keys of the three accounts I had created at that time…

What happened

Whoever has the private key of a bitcoin address owns the address, therefore keeping the private key secure is paramount. The private key is needed to sign a transaction, which is how bitcoins get transferred from one address to another. Basically, without the private key you cannot move the bitcoins out of a bitcoin address.

So before my mobile ran out of juice, I managed to transfer some coins to a new wallet I had just opened on blockchain.info. For some reason I do not remember, I didn’t send all of them, and some millibitcoins stayed behind. I think the phone powered off just after the transaction. That was basically my phone’s swan song.

I sent the phone off to be repaired, but as usual they did a factory reset, and everything that was stored on the phone was gone.

Fast forward to today. I got a new phone a while ago, and I decided to see if I could recover the coins.

It was super easy!

I just scanned the three private keys into Mycelium, regaining all the bitcoins that were left there. As I said at the beginning, having the private key makes you the owner of those bitcoins, or at least gives you the power to move them to an address you control. So I transferred the bitcoins from the old addresses to the new hierarchical deterministic one that comes with the latest Mycelium.

After that, I logged into blockchain.info and sent the bitcoins from that address to the new one as well. Now all the bitcoins are once again on my device and I am in full control.

This time I managed to back up the seed that recreates the hierarchical Mycelium wallet, so next time I have a problem I just have to recreate the addresses using the 12 random words and I’m sorted.

I’m using clipper.is as a password manager to store all these details, so the solution is pretty secure.

So, yeah, I’m pretty pleased with bitcoins and the ability to rescue them. Get some if you haven’t so far.

Packaging Neuronvisio for conda

New Neuronvisio website

Neuronvisio is a piece of software that I wrote quite a long time ago to visualize computational models of neurons in 3D. Back when I was actively developing it, a few services and tools did not exist yet:

  1. conda was not available (a kickass package manager that can also handle binary dependencies)
  2. Read the Docs was not available (auto-updated docs on each commit)
  3. GitHub Pages didn’t have nice themes available (it existed, but you had to do all the heavy lifting; I was hosting the docs there and updating them manually)

To have a smooth way to release and use the software, I was using Paver as a management library for the package, which served very well until it broke, making Neuronvisio no longer installable via pip. Therefore I promised myself that, as soon as I had a little bit of time, I would restructure the whole thing and make Neuronvisio installable via conda, automatically pulling in all the dependencies needed to have a proper environment working out of the box. Because that would be nice.

Read the Docs and GitHub Pages

This one was relatively easy. The Neuronvisio docs have always been built using Sphinx, so hosting them on Read the Docs was going to be trivial. The idea was simply to point neuronvisio.org to neuronvisio.rtfd.org and the job would be done.

Not so fast!

So began the classic yak shaving, which you can read about here, or watch in the gif below:



Yak shaving: recursively solving problems, in the classic case where your last problem ends up miles away from where you started.

It turns out that an apex domain cannot point to a subdomain via a CNAME (foo.com cannot resolve to zap.blah.com), because the DNS protocol does not like it and the internet would burn (or email would get lost, which is pretty much the same problem), so you can only point a subdomain (zip.foo.com) to another subdomain (zap.blah.com).
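You can see the distinction by querying the records yourself (the hostnames here are only illustrative):

# a CNAME on a subdomain is fine
dig +short docs.neuronvisio.org CNAME
# the apex can only carry A/AAAA records (plus MX, NS, SOA and friends), not a CNAME
dig +short neuronvisio.org A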

Therefore my original idea, using the Sphinx-generated website as the entry point, was not a possibility. I could still point neuronvisio.org to whatever I was hosting on the gh-pages branch of the neuronvisio repo. It couldn’t be the docs, because I wanted those updated automatically, so I had to design some kind of presentation website for the software. As I said, GitHub Pages now sports some cool themes, so I picked one and just reused some bits from the intro page.

At the end of all this, I had a Read the Docs hook rebuilding the docs on the fly at each commit, with no manual intervention required, and a presentation website written in Markdown on the GitHub Pages infrastructure, everything hosted, responsive and with the proper domain in place. Note that I hadn’t even started on the package yet. Yeah \o/.

Creating the package

To create the conda package for Neuronvisio I had to write the meta.yaml and the build.sh. That was pretty easy, given that Neuronvisio is a Python package and was already using setup.py. The docs are good, and by googling around, with a lot of trial and error, I got the package done in no (well, not too much) time.
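For a setup.py-based package like this one, the build.sh of the recipe can stay tiny. A rough sketch (not necessarily the exact recipe I ended up with):

#!/bin/bash
# build.sh: install the package into the build environment created by conda-build;
# $PYTHON is set by conda-build to that environment's interpreter
$PYTHON setup.py install --single-version-externally-managed --record=record.txt

# the package itself is then produced by running, from the recipe directory:
#   conda build .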

Solving the dependencies

Neuronvisio has lots of dependencies, but most of them were already packaged for conda. The only big dependencies missing were the NEURON package and the InterViews library. So I created a PR on the community-maintained conda-recipes repository. As you can see from the PR and the commit, this was not easy at all; it was super complicated.

It turned out to be impossible to make a proper NEURON package that works out of the box. What we’ve got so far is Python and InterViews support out of the box, however not hoc support. This is due to the way NEURON figures out the prefix of the hoc files at compilation time: because of the relocation conda performs when the package is installed, the prefix ends up differing, and it’s not easy to patch.

Anyway, there is a workaround, which is to export the $NEURONHOME environment variable, and you are good to go.
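Something along these lines in your shell profile does the trick; the exact path is only a guess to adapt, since it depends on where conda placed the NEURON files in your environment:

# hypothetical location: point NEURONHOME at the nrn data directory
# inside the conda environment where NEURON was installed
export NEURONHOME="$HOME/miniconda/envs/neuro/share/nrn"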

After all this, a shiny new release of Neuronvisio is available (0.9.1), whose goal is to make the installation a bit easier and to get all the dependencies ready to go with one command.

Happy installing.

Running Google AdSense for 4 months, a report

I was always curious to see how running ads on my blog would turn out, how much money I would make, and how the amount of traffic received would affect these numbers.

So I decided to put them on after I moved the blog to my own server. I went the classic route and installed the Google AdSense plugin. Since I now pay for the server, I was wondering if I could bring the blog to a self-sufficient state, i.e. making enough money to pay for its own server. (The server is a $5/month DigitalOcean droplet, which also runs other little hobby projects, so it’s not that expensive.)

Let’s see the numbers

Visitors on Train of Thoughts over the period covered by the ads

As you can see from the graph, this blog gets around 120 sessions per day, mostly new users coming from Google, with a massive drop during the weekend.

Most of this traffic is made up of technical users who are looking for one specific post. They tend to read it and then go about their business.

In the same period this is what the Google AdSense income looks like:

Google AdSense earnings during the same period

The estimated earnings are the interesting part: running these ads on this blog has added up to a whopping £6.32. Considering that Google only pays out once £60 is reached, I can expect to see the payment in more or less 3 years. W00t?

I was wondering if this was because everybody runs AdBlock, so the ads are always hidden. To find out, I added a plugin to keep track of it, and you can see below what the data look like:

40% with AdBlock on! And no one deactivates it!

From the data I’ve got, it seems 40% of the visitors have AdBlock on, and so far no one has read my message and decided to whitelist the website. It can be concluded that 60 sessions (the ones without AdBlock running), reached only during peak days, do not bring in any kind of decent income.

I had ads on top of the posts, in the sidebar and also between the posts. They were very prominent and really annoying, but I thought they were going to pay for the server, so they were a necessary evil. I guess we can conclude that this is not the case.

Different strategy

Given this situation, I’ve decided to slash the ads severely and leave only one in the sidebar, with colours that blend into the site so that it hopefully does not look too alien.

I do not expect people to click on it, or the revenue to increase, however I am happier with the state of the blog: less clutter and visual noise, and a more gratifying and pleasant reading experience. We’ll see how it goes.

Happy reading!

How to transfer files locally

Transferring files is still hard in 2015. And slow.

When I decided to move on to my “new computer”, I had the classic problem: transferring all my data from the old computer to the new one.

So I installed an SSH server on my old computer; both computers were connected to the same wireless network, so I launched rsync to recursively copy my whole home directory from the old laptop to the new one. Not so fast, cowboy!
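The command was something along these lines (the address and paths are placeholders, adapt them to your own machines):

# pull the whole home directory from the old laptop over SSH;
# -a preserves permissions and times, -v is verbose, -z compresses in transit
rsync -avz olduser@192.168.1.10:/home/olduser/ /home/newuser/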

Unfortunately this did not work as expected, for a series of reasons:

  • the packets were continuously dropped by the router: it seems the route to the host was unavailable at times, and rsync kept stalling
  • re-launching the command overwrote all the files in my home directory; however my old computer was running 12.04 LTS while this one is on 14.04, so every time a program upgraded some of its preferences, they were overwritten again by rsync. And then, as soon as the program was launched, the files changed once more.

So I needed a different approach.

I plugged the two computers together with a cable, created two wired connections, gave the two computers two different IPs, and used that link to do the transfer (once I had switched off the wireless). Win!!
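For the record, the gist of the wired setup is something like this (interface names and addresses are made up; the answer linked below has the proper, complete instructions):

# on the old laptop: give the wired interface a static address
sudo ip addr add 10.0.0.1/24 dev eth0
# on the new laptop: same subnet, different address
sudo ip addr add 10.0.0.2/24 dev eth0
# then run the transfer over the cable
rsync -avz olduser@10.0.0.1:/home/olduser/ /home/newuser/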

The details of how to do it are in this Stack Overflow answer, so I won’t repeat them all here.

Go ahead and transfer your files fast!
